Google kicked off this year's I/O at the Shoreline Amphitheatre in Mountain View, California. The keynote brought some interesting announcements, with the full focus on the company's research in Artificial Intelligence (AI); Google has even rebranded its research division as Google AI. The three-day event has more to unveil; meanwhile, here is a summary of the day-one announcements.
Google Fixed The Emojis
This might seem funny, but the media had been all over Google for its flawed hamburger emoji. Google has now fixed the emoji by placing the cheese layer above the patty. While working on it, the team also noticed a gap between the beer and the froth in the beer emoji. That too was fixed, and Sundar was happy to announce it with a chuckle.
Responsibility of Google
Sundar went on to describe the progress made since last year's I/O. He said Google has a responsibility to provide digital skills to communities around the world. So far it has trained over 25 million people and expects to cross 60 million in the next five years. Ultimately, Google wants to make information more useful, accessible, and beneficial to society.
Research At Google
Google has been focusing so heavily on AI research that it has renamed its entire research division Google AI. The company has been trying to bring the benefits of AI to everyone, and Sundar stressed that AI could transform several fields, especially healthcare.
Diabetic Retinopathy: Google has been conducting field trials at the Sankara and Aravind hospitals in India for diagnosing diabetic retinopathy, a leading cause of blindness. Its deep learning technology has helped these hospitals provide an expert-level diagnosis even where trained specialists are scarce. It also turned out that the same retinal scan can yield further insights with the help of AI systems: the model can predict the risk of a cardiovascular event, such as a stroke or heart attack, up to five years out from these retinal images. This could form the basis of a new, non-invasive way to detect cardiovascular risk.
Predicting Medical Events: AI is also helping doctors predict medical events. It can give 24-48 hours' advance notice of whether a patient is likely to get better, get worse, or be discharged. Google has been working on this with partner hospitals using de-identified medical records and was able to predict the likelihood of readmission.
Accessibility: Google AI can make day-to-day tasks much easier for people. Sundar demonstrated AI-enhanced closed captions, in which subtitles are displayed right below each person talking when more than one person is on screen. This will be helpful for people with hearing impairments. Another interesting accessibility enhancement is the addition of Morse code input to Gboard. Sundar thanked Tania, who worked with the team to develop this feature and overcame her disability with the help of Gboard.
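Morse input works by mapping short dot-and-dash sequences to letters. Purely as an illustration of the decoding idea (this is not Gboard's implementation), a minimal lookup-based decoder in Python might look like this:

```python
# Minimal Morse decoder -- an illustrative sketch, NOT Gboard's code.
# Letters are separated by spaces, words by " / ".
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(signal):
    """Decode space-separated Morse letters; '/' separates words."""
    words = signal.split(" / ")
    return " ".join(
        "".join(MORSE.get(letter, "?") for letter in word.split())
        for word in words
    )

print(decode("-- --- .-. ... . / -.-. --- -.. ."))  # -> MORSE CODE
```

A real input method would additionally handle timing (a pause distinguishes letter boundaries), which the space-separated format above sidesteps.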
AI in Gmail: Google has added a Smart Compose feature, which uses machine learning to suggest phrases as you type. You can just hit Tab to complete the sentence. This feature will roll out alongside the recently announced Gmail features.
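Google did not detail Smart Compose's internals in the keynote; it is driven by a machine-learned language model. Purely to illustrate the interaction (type a prefix, accept a completion), here is a toy completer with an invented phrase list standing in for the model:

```python
# Toy phrase completion -- an illustrative sketch, NOT Gmail's model.
# Smart Compose uses a learned language model; this stand-in matches
# the typed prefix against a ranked list of common phrases.
COMMON_PHRASES = [
    "thanks for the update",
    "thanks for your help",
    "looking forward to hearing from you",
    "let me know if you have any questions",
]

def suggest(prefix):
    """Return the highest-ranked phrase starting with the typed prefix."""
    prefix = prefix.lower()
    for phrase in COMMON_PHRASES:
        if phrase.startswith(prefix):
            return phrase
    return None  # nothing to suggest

print(suggest("thanks for the"))  # -> thanks for the update
```

In the real feature, pressing Tab inserts the greyed-out remainder of the suggestion; the quality comes from the model ranking completions by context, not from a fixed list.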
AI in Google Photos: A new feature called suggested actions is coming to Google Photos. The suggestions include sharing photos with the people in them, fixing brightness, converting a photographed document to a PDF, and adding color to your good old black-and-white photos. All of these will arrive in the next few months.
Pichai added that these advancements were made possible by the TPU (Tensor Processing Unit) 2.0 units Google unveiled at last year's I/O. Google has now created a more powerful version, TPU 3.0. It is eight times more powerful than TPU 2.0 and reaches over a hundred petaflops, so much compute that the team had to add liquid cooling for this latest variant.
Improved Google Assistant
Google has decided to make conversations with the Assistant natural and comfortable. The Assistant will use a technology called WaveNet, which generates raw audio to produce a more natural-sounding voice. Google is also adding new voices (three male and two female) in addition to the current voice (Holly). You will even hear the voice of John Legend, the American singer-songwriter and actor, on certain occasions.
Google Assistant now lets you carry on a continued conversation rather than saying 'Hey Google' again and again, and it can understand multiple requests within a single statement. Google continues to add thousands of family activities to the Assistant, like games, stories, and more. By the end of 2018, the Assistant will support 30 languages and be available in eighty countries. With the newly added 'Pretty Please' feature, the Assistant also encourages children to say 'please' when talking to it.
Google Smart Displays: These devices pair the Assistant's voice with a rich visual experience. Google is working with JBL, Lenovo, LG, and others to bring these smart displays to market in July. They let you make video calls, view photos, watch videos, and more while you multitask.
The Assistant on the phone is also getting more immersive, interactive, and proactive. It uses the full device screen to display results, suggestions, visual controls for smart devices, and more. Google has worked with partners like Domino's, Just Eat, and Dunkin' Donuts to provide a new food pickup and delivery experience right within the Assistant. The Assistant can also interact proactively by giving you a visual snapshot of your day with a swipe, including your commute, reminders, and suggestions. The Google Assistant is coming to Google Maps as well, to help with navigation.
Google Assistant will soon be able to schedule appointments for you by actually placing a phone call on your behalf. Sundar demonstrated this with a haircut appointment, and it felt like a Siri killer to us. Once an appointment is fixed, the Assistant notifies you of the date and time. Google Duplex, which combines technologies such as natural language understanding, deep learning, and text-to-speech, makes this possible. It can even cope with calls that do not go as expected by understanding context and nuance. Business hours on Google Maps will also be updated automatically based on such appointment calls.
Google’s Focus on Digital Wellbeing
We are all tethered to our smartphones, and many of us have a fear of missing out. Google says it should be the joy of missing out instead. To that end, it is focusing on users' digital wellbeing: helping them understand their habits, switch off, wind down, and concentrate on what matters most. A new Android feature called Dashboard will show you how you are spending your time.
Upcoming versions of Google apps can help too: YouTube will remind you to take a break if you have been watching videos for a while, and it will be able to combine all notifications into a single digest so you are not disturbed every time a subscription sends one. Another important digital wellbeing tool is Family Link, which lets parents manage their kids' screen time. Google is also teaching kids how to be safe on the web through its Be Internet Awesome program.
Updated Google News
Google has decided to promote quality journalism, which in turn builds a good foundation for democracy. To do so, Google is going to power its news with AI, which can combine various sources to give you deep insight into the full story. In short, the new Google News lets you keep up with the news you care about, understand the full story (using techniques such as temporal co-locality), and support the sources you trust. It will also feature Google's Material theme, which improves the reading experience.
Newscasts is another feature coming to Google News that gives you a preview of a story. There is also a Newsstand section, with which you can read different newspapers and magazines and even subscribe with Google. Subscribe with Google lets you access paid content on any platform while signed in to your account. All of these News features will soon roll out to every Google user.
Android P Announcements
The next version of Android, Android P, will have AI at its core, forming the basis of intelligence, simplicity, and digital wellbeing. With on-device machine learning, the OS can adaptively save battery by reducing CPU wakeups by about 30 percent and running background processes on the small CPU cores.
Android P also brings adaptive screen brightness, which learns your personal preferences as you adjust brightness manually. The next time you are in similar lighting conditions, it adjusts the screen brightness for you automatically.
Another Android P update is predicted app actions, which go beyond simply predicting the next app you are likely to launch. These app actions surface throughout the interface: the app launcher, Google Search, the Play Store, and so on. For example, if you search for a movie in the Google app, an action like booking tickets will appear. App developers can implement this by creating an actions.xml file, which works together with the Slices API.
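The actions.xml format was only previewed for developers at I/O, so the fragment below is a hypothetical sketch of what such a declaration could look like; the intent name and URL template are invented placeholders, not a verified schema:

```xml
<!-- Hypothetical sketch of an actions.xml declaration; the intent name
     and URL template are invented placeholders, not a verified schema
     from the developer preview. -->
<actions>
  <action intentName="actions.intent.BOOK_MOVIE_TICKETS">
    <!-- Deep link the system opens when the user taps the action -->
    <fulfillment urlTemplate="https://example.com/tickets{?movie}" />
  </action>
</actions>
```

The general idea is that the app declares which intents it can fulfill and where, and the system decides when and where to surface the action.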
Simplicity: Android P's UI features new system navigation, new volume controls, rotation confirmation, better notifications, work profiles, quick settings, better screenshots, improved crash dialogs, a refined status bar, and more.
Digital Wellbeing: We have already covered this topic above. The latest version of Android includes a dashboard, app timers, Do Not Disturb, a flip gesture (Shush), and Wind Down. The app timer lets you know when you have already spent enough time in a particular app. The flip gesture, code-named Shush, automatically switches the phone to Do Not Disturb mode when you place it face down on a table, while still letting a list of selected contacts reach you. Wind Down kicks in when it is time for bed: the phone switches to Do Not Disturb and fades the display to an uninviting grayscale.
Along with these announcements, Google unveiled the Android P Beta, which will be available for the Google Pixel and flagship devices from seven other makers: Nokia, Vivo, OnePlus, Xiaomi, Sony, Essential, and OPPO.
ML Kit For Firebase
ML Kit is a new software development kit (SDK) that brings Google's machine learning expertise to mobile developers. It offers five ready-to-use APIs: text recognition, face detection, barcode scanning, image labeling, and landmark recognition.
Google Maps Update
Google can now automatically add buildings, places, and addresses to Maps from Street View or satellite imagery. With AI, Maps can suggest routes, parking, and more, and make more accurate traffic predictions. The new Maps also features trending lists of restaurants, malls, and other places. Based on your previous visits and ratings, Maps will offer a 'For you' tab with a 'your match' score, so you can quickly and confidently pick similar places. Google Maps also helps you and your friends pick a place together: create a shortlist in Maps, share it, and vote in real time to reach a group decision.
Another major update is the integration of the phone camera into map navigation. If you are confused at a junction, just open the camera and point it in any direction; you will see a street view overlaid with the map UI, giving you an idea of where you are. This works through a Visual Positioning System (VPS) alongside GPS. Google is also considering adding an animated guide to show you the way.
Google Lens Integration
You may already be aware of Google Lens in the Google Photos and Assistant apps. Google has now decided to integrate Lens right into the camera app on the Pixel and select other smartphone brands. Three new features are coming to Google Lens: smart text selection, style match, and real-time world info. Smart text selection lets you copy text from the real world as seen through the camera. Style match lets you search for items similar to what you see, like clothes or furniture. The third is the most complex: it uses on-device machine learning plus cloud power to show results in real time, right within the camera frame. All these features will roll out to every user in the next few weeks.
Google’s Self Driving Cars
Google's developments in AI have helped a lot in its research on self-driving cars. The project, now called Waymo, claims to be the only company in the world operating a fleet of fully self-driving cars. Google plans to launch the service by the end of 2018, with Phoenix as the first stop.