Content tagged: AI

Saturday, March 17, 2018, 01:47

Xiaomi Releases The World’s First AI Range Hood Dubbed EyeBot

Chinese manufacturing giant Xiaomi recently attended the 2018 China Home Appliances and Consumer Electronics Expo (AWE), where its Yunmi sub-brand released the world’s first Artificial Intelligence (AI) range hood, the Yunmi AI hood, dubbed EyeBot. The Yunmi AI hood is equipped with the latest artificial intelligence system and a built-in fume-identification camera, and it uses AI image recognition to perform dynamic fume tracking.


As soon as the stove is lit, the hood switches on automatically and its airflow follows the smoke: heavy smoke triggers stronger suction, while light smoke gets a gentler breeze. You also don’t need to worry about the camera being damaged by soot; EyeBot’s smart AI algorithm automatically detects buildup and reminds you to clean it regularly.


At the same time, Yunmi is applying spatial noise-reduction technology to a range hood for the first time, so strong suction doesn’t stop you from enjoying a quiet kitchen.


Since the end of last year, VKontakte users have started seeing a fire icon appear on personal pages and communities. The social network has launched a new talent-discovery algorithm called Prometheus («Прометей»). We surveyed bloggers who have already experienced the algorithm firsthand and found out how it benefits authors.


What Prometheus is

Prometheus is a talent-discovery mechanism that appeared on VKontakte in September 2017. An artificial intelligence finds creators of interesting content and highlights them for 7 days with a fire icon that appears next to the name of the user or community. The algorithm helps them gain more attention and new subscribers. As of early February, Prometheus had marked more than 4,000 authors.


How does Prometheus draw attention to authors? Pages selected by the artificial intelligence get boosted reach in the “Recommendations” section, which every third user of the social network visits. Authors with the fire icon land in the spotlight and gain new subscribers.


Whom does Prometheus pick? An author doesn’t need a popular page for the algorithm to notice them. Among those marked with the fire icon are communities with a hundred subscribers as well as giants with many thousands. Prometheus is thought to favor creative communities: writers, illustrators, photographers. In fact, if a user makes engaging content about, say, life in a communal apartment, the algorithm will help them find more subscribers too.


How do you earn Prometheus’s fire? The social network supports active users who create interesting original content. Write, shoot videos, draw, or photograph whatever you enjoy. Be yourself, but don’t forget about your page’s presentation: it must comply with the site’s rules. And have a little patience: there are many talents and only one algorithm.


What effect the fire icon has

Many authors whose pages Prometheus marks notice the algorithm’s effect almost immediately. Besides the visual badge, an explanation from Prometheus arrives in their private messages, and the reach of their posts rises sharply.


How to use the Prometheus badge

With the increased reach, it’s important not to stop posting and to keep delighting readers with interesting posts, the way only you know how.


How bloggers benefit

The authors and community founders we surveyed note that Prometheus has let VKontakte offer its audience “fresh blood”: new, young authors whom people don’t yet know. Previously, these creators couldn’t reach readers because the feed was dominated by posts from the largest public pages.

Tuesday, March 6, 2018, 23:59

Google Drive getting smarter thanks to AI revamp

Google has announced a new interface that uses artificial intelligence to improve the way users find files in Google Drive.


If you use G Suite, whether at home or at work, you likely have access to a ton of files with multiple owners. Thankfully, Google has announced that Drive will now start intelligently organizing the “Shared With Me” section using AI so that file owners will be listed alongside the files they have shared.


Screenshots of the new layout released by Google show Quick Access columns in the top third of the screen with files organized underneath their respective owners, as well as a new general “Shared By” tag for each file in Drive. On mobile, it seems like a similar Quick Access menu will be shown above the recently opened files list.


According to Google, searching for content based on the owner of the file is the most popular way to search for content. Google says it will use artificial intelligence to predict the people and files you are most likely to search for and make them more visible.


The changes are expected to start appearing within the next two weeks. Google also said it is currently working on even more features that use artificial intelligence. These are expected to be announced over the next few months.


Thursday, February 22, 2018, 02:22

Facebook opens registration for its annual F8 developer conference

Facebook has just opened up registration for its annual F8 developer conference.


Last year, Zuck and company talked about augmented reality, brain-computer interfaces and a complete rewrite of its React framework. Facebook has had a turbulent year in the news, and the company has made some significant moves to restructure how its users are interacting with its products, so expect to hear a lot more details about the evolving future of the platform.


Also expect to hear some of this from Zuckerberg himself and Facebook’s other key leaders. The company will be hosting sessions spanning key topics like “AR, VR, AI, open source, community, social good, growing your business and more,” according to the company. In a blog post highlighting some of the familiar details of the conference, FB VP of Platform Partnerships Ime Archibong detailed that a focus of the event will be to “explore new ways we can build community and bring the world closer together.”


In terms of logistics, the conference will again return to the McEnery Convention Center in San Jose on May 1 and 2. Tickets are $595. You can register for F8 here. As in the past, Facebook will make available live streams of the keynotes, and we will — of course — be covering the news emerging from it with hawkish eyes.

Recently Android Authority named the HUAWEI Mate 10 Pro the best Android flagship of 2017, but what makes it the best? Great design and top-notch specifications are certainly part of the phone’s winning formula. Another key aspect? The magic little Kirin 970 chip, which lets the HUAWEI Mate 10 Pro and the HUAWEI Mate 10 do things the competition can’t.


This chip is unlike others because it has a dedicated NPU that can improve many aspects of your smartphone experience, including the quality of the images you take.


Next level NPU

NPU stands for Neural Network Processing Unit. It allows the HUAWEI Mate 10 Series to perform AI-related tasks directly on the phone. Android Authority previously explained how NPUs work in great detail, but the short of it is that an onboard NPU helps the HUAWEI Mate 10 Series quickly and intelligently understand certain user behavior patterns to improve various parts of the user experience, all without needing to access the Internet.


Because the HUAWEI Mate 10 Series doesn’t rely on the cloud for these special AI tasks, it has much lower latency than offloading to an external AI processor. It’s much more efficient at AI tasks than flagships from companies like Samsung, LG, and Apple, all of which rely on remote servers for AI related tasks.


One area where the NPU particularly shines is photography. Combine this NPU with the HUAWEI Mate 10 Series’ 12-megapixel RGB + 20-megapixel monochrome dual sensor setup and you have a recipe for some great photos.


Getting real

The NPU on the Kirin 970 allows for intelligent photo processing in real time. Huawei taught the AI engine to recognize 13 different scenes — text, food, performance (theatre, etc.), blue sky, snow, beach, dogs, cats, nightscape, sunrise/sunset, plants, portraits and flowers. These 13 scenes arguably encompass the majority of photos average consumers will take on a daily basis.


With all this information, the NPU adjusts virtually every camera setting, from focus and focal length to brightness, contrast, and color, to give you the right settings for any scene. The camera’s viewfinder even shows an icon letting users know which scene it has detected, reassuring them that the camera is doing what is needed.
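The idea of mapping a detected scene to camera parameters can be sketched in a few lines of Python. The scene labels below come from Huawei’s published list; the numeric settings are invented placeholders purely for illustration, not the Mate 10’s real tuning values.

```python
# Hypothetical sketch: look up camera settings for a detected scene label.
# Scene names are from Huawei's list of 13; the values are illustrative only.
SCENE_SETTINGS = {
    "food":       {"saturation": 1.2, "contrast": 1.1},
    "snow":       {"exposure_bias": 0.7, "contrast": 1.0},
    "nightscape": {"iso": 1600, "exposure_bias": 0.3},
}
DEFAULT = {"saturation": 1.0, "contrast": 1.0}

def settings_for(scene: str) -> dict:
    # Fall back to neutral defaults for scenes without a special profile.
    return SCENE_SETTINGS.get(scene, DEFAULT)

print(settings_for("snow"))
```

A real pipeline would feed these values back into the ISP per frame; the point here is only the classify-then-adjust structure the article describes.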


Thanks to the NPU, the camera can also process language translation in real time. This is done completely on the phone itself, making the translation seamless while requiring minimal battery power. Simply hold your phone up to a sign or a paper written in the language you want to translate, and you’ll see the translated text superimposed over the image, as fast as you can scan it. It should eliminate translation issues the next time you travel abroad.


The Kirin 970 further enhances the camera experience with its dual ISP (image signal processor), which allows it to process images, data, and light information faster.


All these little touches make for a smarter camera experience. Even novice users can create nearly DSLR-level results. Of course, there’s also still a manual mode for those that have mad photography skills already.


Night and day

One great example of how AI improves your imaging experience is the camera’s ability to differentiate between a snowy background and an overcast sky. Both are similar colors, but each requires different settings to get the best photo possible. The NPU can process up to 2,000 photos per second. It uses that speed to adjust settings as needed and to learn what each scene looks like, so the next batch of photos can be even better. The hard part is that all of this has to happen transparently in the background so it doesn’t spoil the photography experience; slow cameras ruin picture taking. Bringing the AI onto the phone alleviates that concern.


Local processing is faster than cloud processing by a wide margin. If the cost of this photo processing was speed, most users would become frustrated with the phone. Not only does the NPU allow for a frustration-free experience, it actually enhances the experience, even for amateur shutterbugs.


AI is the future and it’s exciting to see at least one major smartphone manufacturer on board with this trend. It shows a commitment to the consumer and the foresight to see where our phones are taking us in the future.


If you want the kind of enhancements an NPU brings to the table, you won’t find them anywhere but in Huawei’s Mate 10 Series.

It seems that LG is going to “play it safe” this year, as the Korean company has decided to delay the official launch of its LG G7 flagship. Instead, it will probably improve one of its most successful smartphones, which made an impressive launch last year. Yes, we’re talking about the LG V30s, the improved variant of the 2017 flagship that impressed most of us with its price, specs, and performance.


Recent rumors say LG will unveil the improved version of the V30 at Mobile World Congress this year, packing 256GB of storage along with LG Lens, a feature that will bring AI to the phone’s imaging capabilities. Rumor has it that LG Lens will be able to translate text and show online product listings when you snap a picture of a product.


It will also offer bar-code and QR-code recognition, though that isn’t exactly groundbreaking. There’s no need to worry about the rest of the technical specs, as the V30s is expected to keep all of its predecessor’s hardware. That means a 6-inch FullVision OLED display with QHD+ resolution, a Snapdragon 835 chipset, 16-megapixel dual rear cameras, a 5-megapixel selfie camera, and a 3,300mAh battery.


All the recent info suggests that the upgraded LG V30s will be announced during MWC 2018 and will hit stores on March 9, starting in South Korea, with a price tag of approximately 1 million won (~$919).


That’s all we know so far; stay tuned for more on this subject.

Thursday, February 8, 2018, 01:34

ClearBrain uses AI to help advertisers target the right users

The team at ClearBrain has a big goal: “Our mission is to democratize AI for marketers.”


That’s how CEO Bilal Mahmood put it, though Mahmood (a former product manager at Optimizely) and his co-founder Eric Pollmann (a former engineer on Google’s ad team) aren’t trying to do all that democratizing at once. Instead, they’re tackling a more specific challenge — helping companies target ads toward the users most likely to (say) sign up for a subscription, buy a product or cancel their account.


Mahmood said that this kind of targeting has been available to larger companies, but was too expensive for everyone else, regardless of whether they wanted to buy or build it internally. With ClearBrain, on the other hand, pricing starts at $499 per month, and it’s taking advantage of what Mahmood described as “this growing trend in terms of different API data layers” — namely, the rise of tools like Segment, Optimizely and Heap.


“There was an opportunity to be this intelligence layer on top of the data layers,” Mahmood said.


So ClearBrain pulls data from the tools that businesses are already using, then deploys artificial intelligence to analyze and group users based on how likely they are to perform a specific action. Customers can then use that data to target their Facebook ads, emails or other messaging.
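ClearBrain hasn’t published how its models work, but the general idea of scoring users by their likelihood to perform an action can be sketched as a simple logistic propensity model. Everything below is a hypothetical illustration: the feature names, weights, and threshold are invented, not ClearBrain’s.

```python
import math

# Hypothetical propensity model: score each user's likelihood of, say,
# subscribing, from simple behavioral features. All weights are invented.
WEIGHTS = {"sessions_last_week": 0.4, "pricing_page_views": 0.9, "support_tickets": -0.5}
BIAS = -2.0

def propensity(user: dict) -> float:
    # Weighted sum of features, squashed to a 0..1 probability (logistic).
    z = BIAS + sum(WEIGHTS[k] * user.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def segment(users: list) -> list:
    # Keep the users most likely to convert, e.g. as a targeted ad audience.
    return [u["id"] for u in users if propensity(u) > 0.5]

users = [
    {"id": "a", "sessions_last_week": 6, "pricing_page_views": 2},
    {"id": "b", "sessions_last_week": 1, "support_tickets": 3},
]
print(segment(users))  # ['a']
```

In practice the weights would be learned from historical conversion data rather than hand-set, and the resulting segment would be synced to an ad platform or email tool.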


“We’re sort of like the Switzerland of AI,” Mahmood said, because ClearBrain serves as the neutral coordinator between “the data layers and the action layers” that a business might use. As the company adds more capabilities to the platform, he’s hoping it can become “the central nervous system for every marketing team.”


The startup is already working with InVision and theSkimm, among others. There might not seem to be much connection between a design software-maker and a media company centered on newsletters, but Mahmood said the product has been particularly useful to businesses with subscription products, because they have easy-to-follow customer funnels.


ClearBrain is part of the current class of startups at Y Combinator, and it’s also raised $1.2 million in funding from YC, Pear VC, Industry Ventures and Dan Hua Capital, as well as Optimizely founders Dan Siroker and Pete Koomen.

Artificial intelligence and machine learning are terms which have been thrown around a lot in the tech industry over the last few years, but what exactly do they mean? Anyone vaguely familiar with sci-fi tropes will probably have an idea about AI, though they may view it as a little more sinister than what’s around today.


The two terms are often conflated and, incorrectly, used interchangeably, particularly by marketing departments that want to make their technology sound sophisticated. In fact, artificial intelligence and machine learning are very different things, with very different implications for what computers can do and how they interact with us.


It starts with Neural Networks


Machine learning is the computing paradigm that’s led to the growth of “Big Data” and AI. It’s based on the development of neural networks and deep learning. It’s typically described as imitating the way humans learn, but that’s a bit of an oversimplification. Machine learning actually relates to statistical analysis and iterative learning.


Instead of building a traditional program composed of logical statements and decision trees (if, and, or, and so on), a neural network is built specifically for training and learning, using a parallel network of neurons, each set up for a specific purpose.


The nature of any particular neural network can be very complicated, but the key to how they function is applying weights (or factors of importance) to some attribute of the input. Using networks of various weights and layers, it’s possible to produce a probability or estimate that your input matches one or more of the defined outputs.
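A minimal sketch of that idea in Python: each neuron applies weights to its inputs, adds a bias, and squashes the result into a probability-like score with a sigmoid. The weights here are arbitrary illustrative values, not taken from any real trained network.

```python
import math

def sigmoid(x):
    # Squash any real number into the (0, 1) range, i.e. a probability-like score.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

def tiny_network(inputs):
    # Two hidden neurons feeding one output neuron: a two-layer "network".
    h1 = neuron(inputs, weights=[0.5, -0.6], bias=0.1)
    h2 = neuron(inputs, weights=[-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], weights=[1.2, -0.7], bias=0.2)

score = tiny_network([1.0, 0.0])
print(round(score, 3))  # a value between 0 and 1
```

Real networks have thousands or millions of such weights across many layers, but the structure is the same: weighted sums and nonlinear activations producing an output estimate.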


The problem with this type of computing, just like regular programming, is its dependence on how the human programmer sets it up, and readjusting all these weights to refine the output accuracy could take too many man-hours to be feasible. A neural network transitions into the realm of machine learning once a corrective feedback loop is introduced.


Enter Machine Learning


By monitoring the output, comparing it to the input, and gradually tweaking neuron weights, a network can train itself to improve accuracy. The important part here is that a machine learning algorithm is capable of learning and acting without programmers specifying every possibility within the data set. You don’t have to pre-define all the possible ways a flower can look for a machine learning algorithm to figure out what a flower looks like.
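That corrective feedback loop can be sketched with a single weight and plain gradient descent on toy data. The data, starting weight, and learning rate below are made up for illustration: the program never states the rule y = 2x, yet the weight converges toward it by repeatedly comparing its output to the target and nudging itself.

```python
# Toy corrective feedback loop: learn w so that w * x approximates y = 2 * x.
# The training data and learning rate are illustrative, not from any real system.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                 # initial guess
learning_rate = 0.05

for epoch in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y          # compare output to the known answer
        w -= learning_rate * error * x  # nudge the weight to shrink the error

print(round(w, 2))  # converges to 2.0
```

This is the simplest possible case; training a real neural network applies the same compare-and-adjust step to every weight in the network via backpropagation.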


Stanford University defines machine learning as “the science of getting computers to act without being explicitly programmed”.


Training a network can be done in a number of different ways, but all involve a brute-force iterative approach to maximizing output accuracy and training the optimum paths through the network. However, this self-training is still a more efficient process than optimizing an algorithm by hand, and it enables algorithms to sift and sort through much larger quantities of data in much less time than would otherwise be possible.


Once trained, a machine learning algorithm can sort brand-new inputs through the network with great speed and accuracy in real time. This makes it an essential technology for computer vision, voice recognition, language processing, and scientific research projects. Neural networks are currently the most popular way to do deep learning; other approaches to machine learning exist, but the method described above is currently the best we have.

What AI is and isn’t


Machine learning is a clever processing technique, but it doesn’t possess any real intelligence. An algorithm doesn’t have to understand exactly why it self-corrects, only how it can be more accurate in the future. However, once the algorithm has learned, it can be used in systems that actually appear to possess intelligence. A good way to define artificial intelligence would be the application of machine learning that interacts with or imitates humans in a convincingly intelligent way.


A machine learning algorithm that can sift through a database of images and identify the main object in each picture doesn’t really seem intelligent, because it’s not applying that information in a human-like way. Implement the same algorithm in a system with cameras and speakers, one that can detect objects placed in front of it and speak back their names in real time, and it suddenly seems much more intelligent. Even more so if it can tell the difference between healthy and unhealthy foods, or distinguish everyday objects from weapons.


A good definition of AI is a machine that can perform tasks characteristic of human intelligence, such as learning, planning, and decision making.


Artificial intelligence can be broken down into two major groups: applied and general. Applied artificial intelligence is much more feasible right now. It’s tied closely to the machine learning examples above and is designed to perform specific tasks, such as trading stocks, managing traffic in a smart city, or helping to diagnose patients. The task or area of intelligence is limited, but there’s still scope for applied learning to improve the AI’s performance.


General artificial intelligence is, as the name implies, broader and more capable. It’s able to handle a wider range of tasks, understand pretty much any data set, and therefore appears to think more broadly, just like humans. General AI would theoretically be able to learn outside of its original knowledge set, potentially leading to runaway growth in its abilities. Interestingly enough, the first machine learning discoveries reflected ideas of how the brain develops and people learn.


Machine learning, as part of a bigger complex system, is essential to achieving software and machines capable of performing tasks characteristic of and comparable to human intelligence — very much the definition of AI.


Now and into the future


Despite all the marketing jargon and technical talk, both machine learning and artificial intelligence applications are already here. We are still some way off from living alongside general AI, but if you’ve been using Google Assistant or Amazon Alexa, you’re already interacting with a form of applied AI. Machine learning used for language processing is one of the key enablers of today’s smart devices, though they certainly aren’t intelligent enough to answer all your questions.


The smart home is just the latest use case. Machine learning has been employed in the realm of big data for a while now, and these use cases are increasingly encroaching into AI territory as well. Google uses it for its search engine tools. Facebook uses it for advertising optimization. Your bank probably uses it for fraud prevention.


There’s a big difference between machine learning and artificial intelligence, though the former is a very important component of the latter. We’ll almost certainly continue to hear plenty of talk about both throughout 2018 and beyond.