What would you like to see in Google apps with more AI?

“If I had asked people what they wanted, they would have said faster horses.” It is difficult to foresee the future of technology because it only takes one breakthrough to entirely change the paradigm. That sentiment and its variants, such as “people don’t know what they want until you show it to them,” capture the problem well, and it applies in particular to the upcoming wave of AI features for both new and legacy Google apps.

A misconception

Google was not taken by surprise by what was to come. At I/O, its most important annual developer conference, the company publicly discussed large language models (LLMs) and natural language understanding (NLU). Speaking to Pluto was one of the demos of the Language Model for Dialogue Applications (LaMDA) in 2021, while chatting with the AI Test Kitchen app showed off LaMDA 2 last year.

There is also the Multitask Unified Model (MUM), which will one day be able to answer the question, “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall. What should I do differently to prepare?”, as well as the upcoming ability to use Google Lens to snap an image of a damaged bike part and get information on how to fix it.

Sundar Pichai didn’t just describe the technology; he also said that “natural conversation skills have the potential to make information and computing substantially more accessible and easy to use.” Google specifically named Search, Assistant, and Workspace as the areas where it intends to “[incorporate] greater conversational features.”


As the subsequent debate has shown, however, that was not enough to stick in people’s memories. Google is at fault for failing to give more concrete examples that clearly conveyed to the general public how these new AI features would enhance the products they use every day.

Even if more specific examples had been given in May 2022, however, they would soon have been overshadowed by the launch of ChatGPT later that year. Nothing is more tangible than actual experience, and the OpenAI demo/product is usable today (and priced accordingly). It has sparked numerous concerns about how direct answers might affect Google’s ad-based business model: if consumers already had the answer in the form of a generated and summarized sentence, they wouldn’t need to click on links.

What caught Google off guard was the pace at which rivals have incorporated these new AI developments into shipping apps. The “code red” makes clear that the company didn’t believe it would need to roll out anything beyond demos so quickly. Executives are quick to point out how what’s now on the market “can make stuff up,” which would be reputationally damaging if it ever launched at the scale of Google Search. Safety and accuracy are concerns Google has specifically called out with its existing previews.

What’s coming

The same day Google announced layoffs, a New York Times report described over 20 AI projects the company plans to debut this year, starting at I/O 2023 in May.

These announcements, likely headlined by a “search engine with chatbot characteristics,” appear to be aimed squarely at OpenAI. Particularly telling is an “Image Generation Studio” that would compete with DALL-E, Stable Diffusion, and Midjourney, with a Pixel wallpaper maker possibly one part of it. Of course, Google would immediately walk into the art world’s pushback against generative image AI.

With the exception of Search (more on that later), nothing in the leak appears to fundamentally alter how the average user interacts with Google products. In fact, that has never been Google’s strategy, which has been to add small conveniences to existing products, or even just parts of them, as new technology becomes available.

Smart Compose in Docs and Gmail doesn’t quite write the email for you, but the autocomplete suggestions are genuinely helpful. Smart Reply is available in Gmail, Google Chat, and Messages.

On Pixel, features like Call Screen, Hold for Me, Direct My Call, and Clear Calling leverage AI to enhance the phone’s original core use case, while on-device speech recognition enables a great Recorder app and a faster Assistant. There’s also computational photography and Magic Eraser, of course.

That’s not to say Google hasn’t used AI to build wholly new applications and services. Advances in natural language understanding gave us Google Assistant, while the computer vision advances that enabled search and categorization in Google Photos happened over seven years ago and are now taken for granted.

More recently, Google Maps’ Live View feature offers augmented reality (AR) directions, while Google Lens enables visual search by snapping an image and adding questions to it.

Search and AI

Post-ChatGPT, people are imagining a search engine where your question is answered directly by a sentence generated entirely for you and that query, as opposed to getting links or being shown a “Featured Snippet” that quotes a relevant website that might have the answer.

You won’t always (or even often) want to read a full sentence to get an answer when a glance at a single line in a Knowledge Panel, whether it’s a date, time, or other straightforward fact, would do.

It will also take time before people come to trust the generation and summarization powers of any company’s chatbot search. With Featured Snippets, I can at least quickly determine whether I trust the publication or source.

In many ways, that direct sentence is what smart assistants have been waiting for, with Google Assistant today turning to facts it already knows (Knowledge Panels/Graph) for dates, locations, and the like, and to Featured Snippets otherwise. It’s fair to assume that when you interact with voice technology you can’t always look at a screen and that you expect a quick response.

Even though the history of technology is full of iterative updates quickly supplanted by brand-new, game-changing inventions, it doesn’t feel like the technology is there yet. I’m reminded of how early voice assistants consciously played up the novelty of a machine mimicking human speech. How long will the novelty last when this new wave of AI comes close to answering your questions or doing a task for you?
