
Google I/O Recap: What's New with Google Assistant



I was very excited and honored this year to be selected to attend the 2019 Google I/O conference at the Shoreline Amphitheatre in Mountain View, California. One of the technologies I was most excited to hear about was Google Assistant. Google Assistant is a virtual assistant created by Google that has grown to support 19 languages in 80 countries. There are over one million Actions for the Assistant, and it is available on over one billion devices. As it has evolved, the Assistant has fascinated me greatly because it lets me interact with my devices using only my voice. I am a busy mom and professional who is on the go a lot. I have several Google Home devices throughout my house and can talk to Google from any room. This has helped me in many ways to be a happier and more productive person. When I'm not within range of a Google Assistant enabled device, I find myself throwing out "OK, Google" in vain :).

Meet the Next Generation Assistant

Improving Speed

When I settled in for the keynote at Google I/O, I was not disappointed by the new announcements about the Assistant. One of the biggest challenges I experience with Google Assistant is that it needs an internet connection to understand what I've said. This can be particularly frustrating when the request doesn't actually require connectivity, such as setting a timer. Currently, the Assistant's speech processing is quite complex. It involves several machine learning models: one that maps the incoming audio into phonetic units, a second that assembles those units into words, and a third that predicts the likely sequence of words. Together, these models require about 100 gigabytes of storage and a network connection. Google made the groundbreaking announcement that they have been able to shrink this down to half a gigabyte, small enough for the models to be stored locally on the device. This lets the Assistant process speech even in airplane mode, and it makes the Assistant up to 10x faster. I was ecstatic to hear about this.
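To make that three-stage pipeline concrete, here is a minimal sketch of the flow described in the keynote. Everything in it is illustrative: the function names, types, and stub outputs are my own stand-ins, since Google's on-device models are not public.

```typescript
// Illustrative three-stage speech pipeline mirroring the keynote's description.
// All names, types, and model behavior are hypothetical stand-ins.

type Phoneme = string; // phonetic unit, e.g. "h", "ay"

// Stage 1: map incoming audio samples into phonetic units.
function acousticModel(audio: Float32Array): Phoneme[] {
  return ["h", "ay"]; // stub: a real model infers phonemes from the waveform
}

// Stage 2: assemble phonetic units into candidate words.
function pronunciationModel(phonemes: Phoneme[]): string[] {
  return ["hi", "high"]; // stub: a real model produces word hypotheses
}

// Stage 3: pick the most likely word sequence (language model).
function languageModel(candidates: string[]): string {
  return candidates[0]; // stub: a real model scores full sequences
}

// With all three models stored on-device (~0.5 GB instead of ~100 GB),
// the whole chain runs without a network round trip.
function transcribe(audio: Float32Array): string {
  return languageModel(pronunciationModel(acousticModel(audio)));
}
```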

Demoing the Next Generation Assistant

After this exciting announcement, the keynote moved on to a demonstration of the new, faster Assistant, which was quite impressive. You can see this demonstration here. The demonstration shows Google Assistant quickly handling back-to-back commands. These commands include searching for specific images, ordering a Lyft, setting a timer, taking a photo, checking the weather, and various other requests. I was impressed that "Hey Google" only had to be said once. The Assistant could also navigate photos and check on a flight time while responding to a text message. The ability to multitask using the Assistant is greatly improved. The Assistant can handle more complicated speech scenarios, such as letting the user compose and send an email. I can only imagine how different it will be to use the next generation Assistant without the need for a round trip to the network.

Adding more personalization

Google Assistant will become more personal in the future with features such as "Picks for you", a feature that selects recipes on a personal basis. This uses a technique called personal reference resolution, which makes it possible to understand phrases such as "mom's house". Usually, this will refer to your mother's house. However, it could also be the name of a grocery store or a restaurant. Using personal reference resolution, the Assistant can build associations like this.
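As a rough illustration of what reference resolution means in practice, the sketch below maps a personal phrase to a concrete place using per-user associations. The data structure and lookup are my own simplification, not how the Assistant actually implements it.

```typescript
// Simplified illustration of personal reference resolution.
// The real Assistant learns associations from many signals;
// this toy version just consults a per-user lookup table.

interface Place {
  name: string;
  address: string;
}

// Hypothetical learned associations for one user.
const personalReferences = new Map<string, Place>([
  ["mom's house", { name: "Mom", address: "12 Example Lane" }],
]);

function resolveReference(user: Map<string, Place>, phrase: string): Place | undefined {
  // When there is no personal association, a real system would fall back
  // to a generic search (e.g. a restaurant called "Mom's House").
  return user.get(phrase.toLowerCase());
}

const destination = resolveReference(personalReferences, "Mom's house");
console.log(destination?.address); // "12 Example Lane"
```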

The next generation Assistant also has the ability to set personal reminders. As a mother, this will be a great addition to our household: I will be able to remind my teenager about things when I'm not around. Finally, it was exciting to hear that you no longer have to say "Hey Google" to stop alarms!

Enabling a new driving mode

Earlier this year, the Assistant was added to Google Maps. It was announced that Google Assistant will now work with Waze, and that an improved driving mode is coming. In the future, there will be a "driving mode" for the Assistant. Its dashboard will surface your most relevant activities first. It will show things like the ability to navigate to the destination of an upcoming appointment in your calendar, or podcasts you often listen to at certain times of day during your commute. You will be able to use Google Assistant to send texts and answer phone calls without leaving navigation mode.

Are you excited to try it? The next generation Assistant will first appear on the Pixel 4, which is rumored to be available in October 2019 :).

Learn all about Google Assistant

Having heard all the announcements in the keynote, I was delighted to see some of the new features for developers in the talks and the Google Assistant sandbox dome.

There were many great presentations on how to get started with Google Assistant. The talks gave a great overview for those who are new to Google Assistant, and broke down the various groups of people who might want to build for the Assistant:

  • Content owners and web developers – templates and structured data markup for enhanced search, How-to tutorials, and FAQs.
  • Android app developers – App Actions and Slices.
  • Innovators in the conversation space – Conversational Actions with Interactive Canvas to build experiences for smart displays.
  • Hardware developers – the Smart Home SDK.
Note: If you're interested in writing your own Google Assistant action, I've written a tutorial on how to get started here.

Improving existing content for Google Assistant

The talk Enhance Your Search and Assistant Presence with Structured Data detailed for web developers how to use structured data, and announced two new types that are now supported. Structured data makes it easier for developers who have existing web content to create rich search results from that content without re-developing it for all the different platforms. This can help a developer reach a wider audience. Structured data already supported podcasts, recipes, and news as of last year, and this year it adds support for How-to and FAQ markup. Video objects can be used with the How-to markup for people who have good how-to content on YouTube; it's as easy as filling out a Google Sheet to create the template. How-to guided experiences look great on a smart display. The talk also showed how to use the Actions on Google simulator and Actions on Google analytics to preview and test the structured markup as it is being developed. This can be an easy way to bring existing content to life in Google Search and the Assistant. This is all great information for web developers, but what about Android developers? The most exciting talk was still to come.
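As a concrete example, FAQ structured data is just JSON-LD embedded in an existing page. The snippet below is a minimal sketch following schema.org's FAQPage type; the question and answer text are placeholders of my own.

```html
<!-- Minimal FAQ structured data, embedded in an existing page. -->
<!-- Types and fields follow schema.org's FAQPage; content is placeholder. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does the Assistant work offline?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "With the next generation Assistant, speech models are stored on-device, so some requests work without a connection."
    }
  }]
}
</script>
```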

Using the assistant in your Android app

App Actions for an Android app involve deep links and Slices. Using App Actions can really extend the reach of your app. The main reason to use App Actions is to increase user engagement. Often, applications are buried in a long list of user-installed apps. Having an action that deep links from the Assistant to a particular feature of your app increases the chance that the user will discover that feature. It makes it easier and more convenient for the user to engage with your app.

Adding actions to an app

It doesn't take much developer effort to add App Actions to an app. All a developer has to do is add an actions.xml file to the app's res/xml directory. This file contains action blocks that represent the App Actions. Within an action block there can be one or more fulfillment mechanisms that map the action to a fulfillment intent, and the fulfillment may include parameters that are extracted from the user's request. Google uses its own natural language processing to match requests to the appropriate actions, so the developer doesn't have to worry about that, and custom intents can be built using the Dialogflow console.
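Here is a minimal sketch of what that file can look like, based on the App Actions developer preview. The app, its deep link URL template, and the parameter names are hypothetical; actions.intent.START_EXERCISE is one of the built-in intents.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/actions.xml: a minimal sketch for a hypothetical fitness app. -->
<!-- The urlTemplate and parameter names are illustrative placeholders. -->
<actions>
    <action intentName="actions.intent.START_EXERCISE">
        <fulfillment urlTemplate="myfitnessapp://start{?exerciseType}">
            <parameter-mapping
                intentParameter="exercise.name"
                urlParameter="exerciseType" />
        </fulfillment>
    </action>
</actions>
```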

Slices

Slices are the successor to app widgets in Android. By making a small change in the actions.xml file to indicate that the fulfillment should be rendered as a Slice, the fulfillment will be displayed directly in the Assistant. Slices are essentially a visual representation and enhancement of an app feature. They show rich, dynamic, and interactive content.
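Concretely, that small change is a fulfillment mode on the action's fulfillment element. The sketch below assumes the same hypothetical fitness app as above; the fulfillment mode value follows the App Actions developer preview, and the content URI is a placeholder for the app's own Slice.

```xml
<!-- Rendering a fulfillment as a Slice instead of a deep link. -->
<!-- The content URI is a placeholder for the app's own SliceProvider. -->
<action intentName="actions.intent.GET_EXERCISE_OBSERVATION">
    <fulfillment
        fulfillmentMode="actions.fulfillment.SLICE"
        urlTemplate="content://com.example.fitness.slices/stats" />
</action>
```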

Find 'toothbrush' moments

There were many talks about conversation design for good actions. One of the terms I heard often was "toothbrush moments". The point is not to exploit App Actions to create voice actions for every feature of an app. Instead, it is best to find the features that the user will want to use when near a Google Assistant enabled device. "Start a run" is a good example of a toothbrush moment, as is "how many calories have I burned?" This was demonstrated during the talk with the Nike Run Club app.

Conversational Actions

One limitation of App Actions and Slices is that they are only available on devices where the app is installed. Because there are many devices that have the Assistant, some not even based on Android, there are Conversational Actions, which are universal across all devices. Many talks focused on building Conversational Actions, including one called Quality Assistance Design for Google Assistant.

Now that these talks had explained what was available to the Android developer, I was curious about building games with Interactive Canvas.

Building Interactive Experiences with Interactive Canvas

Interactive Canvas is a way to use HTML, CSS, and JavaScript to build rich, interactive Google Assistant experiences. It can be used to create full-screen visuals and custom animations. There was a talk specifically about creating games with Interactive Canvas. The Google Home Hub is a great target for these interactive actions. Rich responses are used in conjunction with the URL of a web application to let the developer create an immersive experience. The developer uses Dialogflow to create the custom conversations the user will have with the action, and has full control over the conversation flow. From there, the developer has pixel-level control over the screen, where the game's HTML, CSS, and JavaScript run.
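On the web side, the page registers callbacks with the interactiveCanvas client library that the canvas HTML page loads. The sketch below is minimal: the drawFrame renderer is a hypothetical placeholder of my own, and the shape of the state data is whatever your own webhook sends.

```typescript
// Minimal web-side sketch for an Interactive Canvas page.
// The interactiveCanvas global is provided by the client library
// loaded in the canvas HTML page; declared here for type checking.
declare const interactiveCanvas: {
  ready(callbacks: { onUpdate(data: Array<Record<string, unknown>>): void }): void;
  sendTextQuery(query: string): Promise<string>;
};

interactiveCanvas.ready({
  // Called whenever the conversational side (e.g. a Dialogflow webhook)
  // sends new state to the canvas.
  onUpdate(data) {
    const state = data[0]; // shape is defined by your own webhook
    drawFrame(state);      // hypothetical renderer
  },
});

// Hypothetical renderer: full-screen visuals via HTML/CSS/JS go here.
function drawFrame(state: Record<string, unknown>): void {
  document.body.textContent = JSON.stringify(state);
}

// The page can also drive the conversation, e.g. when the player wins:
// interactiveCanvas.sendTextQuery('I won the game');
```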

Giving the smart home an SDK

One of the biggest announcements in the Google I/O keynote was the Nest Hub Max. One of the lesser-known announcements that can have a major impact on developers is a developer package known as the Local Home SDK. This was demonstrated in the sandbox with a visual representation built out of toys. Essentially, it allows home devices such as smart speakers and smart displays to send requests to third-party gadgets such as lights, thermostats, and cameras over the local network rather than via the cloud. This will be welcome on those days when the internet is acting up! Google has also made it easier to set up equipment like GE smart lights using the Google Home app, which will streamline device setup for the consumer. They are releasing 16 new device types and three new device traits for smart home Actions developers. It was announced that there will be more details about the Google Assistant Connect platform later this year; this is the program that allows smart home device makers to easily add the Assistant to their devices at low cost. Google says it has been working to develop products through this program with Anker, Leviton, and Tile.
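Local Home SDK apps are written in TypeScript or JavaScript and run on the speaker or display itself. The skeleton below is a sketch based on the SDK as announced; the device ID and the empty command handling are placeholders, not a working integration.

```typescript
/// <reference types="@google/local-home-sdk" />
// Minimal Local Home SDK skeleton: runs on the smart speaker/display and
// talks to gadgets over the local network instead of the cloud.

const app = new smarthome.App('1.0.0');

app
  .onIdentify(async (request) => {
    // Match the locally discovered gadget to a device from your cloud SYNC.
    const scan = request.inputs[0].payload.device;
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.IDENTIFY,
      payload: { device: { id: scan.id ?? 'placeholder-device-id' } },
    };
  })
  .onExecute(async (request) => {
    // Send the command (e.g. on/off) to the gadget on the LAN here.
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.EXECUTE,
      payload: { commands: [] }, // placeholder: report per-device results
    };
  })
  .listen()
  .then(() => console.log('Local Home app ready'));
```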

Where do you go from here?

This was an exciting Google I/O for those interested in the Assistant. There were talks for everyone concerned with the Assistant, including web developers, Android developers, hardware developers, and most importantly, us, the consumers. For developers, there were many talks about how to develop good Assistant actions in a number of contexts. If you are interested in checking out any of these talks or seeing a sandbox tour, please check out the links below!

Assistant-related talks from Google I/O 2019 can be found here.
A tour of the Google Assistant sandbox demo tent can be found here.

