Artificial intelligence is about to change how you Google things.
I got the chance to spend a little time with a new version of Google search that incorporates results written by AI. Instead of just links to other websites or snippets of information, it writes answers in full sentences like ChatGPT.
You’ll even be able to follow up like you’re having a conversation. Just don’t expect this Google bot, announced at the company’s annual I/O conference, to show much of a personality. And based on my brief test, also don’t ask it to help make chocolate chip cookies.
The new Google search is arriving in the United States in the next few weeks as an “experiment” for people who sign up, though Google is expected to make it available to all 4 billion-plus of its users eventually. I found it a thoughtful integration of AI into search, one that could speed up how you research complicated topics. But it will also bring a whole slew of new Googling techniques to learn — and potential pitfalls to be wary of.
Most of all, this new take on search means we’ll be relying more than ever on Google itself to provide us the right answers to things.
Here’s how it works: You’ll still type your queries into a basic Google search box. But now on the results page, a colorful window pops up that says “generating” for a moment and then spits out an AI answer. Links for the sources of that information line up alongside it. Tap a button at the bottom, and you can keep asking follow-ups.
They’re calling it Search Generative Experience, or SGE, a real mouthful that references the fact that it’s using generative AI, a type of AI that can write text like a human. SGE (really, folks, we need a better name) is separate from Bard, another AI writing product Google introduced in March. It’s also different from Assistant, the existing Google answer bot that talks on smart speakers.
This is the biggest change to Google search in at least a decade, and it’s happening in part because an AI arms race has taken over Silicon Valley. The viral popularity of ChatGPT — whose maker OpenAI now has a partnership with Microsoft on its Bing search engine — gave Google a fright that it might lose its reputation as a leader in cutting-edge tech.
“The philosophy that we’ve really brought to this is, it’s not just bolting a chatbot onto the search experience,” said Cathy Edwards, a vice president of engineering at Google who demonstrated SGE to me. “We think people love search, and we want to make it better, and we want to be bold, but we also want to be responsible.”
Yet it remains an open question how much AI chatbots can improve the everyday search experience. After Microsoft added OpenAI’s chatbot to its Bing search engine in February, it surged in traffic rankings. But it has now returned to last year’s levels, according to global traffic data from Cisco Umbrella.
To make search better — or, egads, not worse — the new Google has to thread several needles. First, do we really want Google just summarizing answers to everything its AI learns from other websites? Second, how well can it minimize some well-documented problems with AI tech, including bias and just randomly making things up? Third, where do they stick the ads?
Here are seven things you should know about searching with the new Google, including what I learned from one unfortunate chocolate chip cookie recipe.
1. It tackles complicated questions, but knows when to go away
Google’s big idea is that AI can reduce the number of steps it takes to get answers to the kinds of questions that today require multiple searches or poking around different websites. Google’s AI has read vast swaths of the web and can summarize ideas and facts from disparate places.
In my conversation with Edwards, the Google search executive, they offered this example query: What’s better for a family with kids under 3 and a dog, Bryce Canyon or Arches? “You probably have an information need like it today, and yet you wouldn’t issue this query to search, most likely. It’s sort of too long, it’s too complex,” Edwards said.
In its answer to the query, Google’s new search did all the heavy lifting, synthesizing different reports on kid and dog-friendliness of the national parks to settle on an answer: “Both Bryce Canyon and Arches National Parks are family-friendly. Although both parks prohibit dogs on unpaved trails, Bryce Canyon has two paved trails that allow dogs.”
One thing I also liked: Google’s AI has a sense of when it’s not needed. Ask a question that can be answered briefly — what time is it in Hong Kong — and it will give you the classic simple answer, not an essay about time zones.
2. The answers can be wrong
This brings us to my chocolate chip cookie experience. Ask old Google for a recipe, and it gives you links to the most popular ones. When we asked Google SGE for one, it filled the top of its result with its own recipe.
But the bot missed one key ingredient of chocolate chip cookies: chocolate chips.
Whoops. Then, in the instructions portion, there was another anomaly: It said to stir in walnuts — but the recipe didn’t call for walnuts. (Also, walnuts have no place in chocolate chip cookies.) Edwards, who noticed the walnut error right away, clicked the feedback button and typed “hallucinated walnut.”
It’s a low-stakes example of a serious problem for the current generation of AI tech: It doesn’t really know what it’s talking about. Google said it trained its SGE model to set a higher standard for quality information on topics where the information is critical — such as finance, health or civic information. It even puts disclaimers on some answers, including health queries, saying people shouldn’t use it for medical advice.
Also important, I saw evidence Google SGE sometimes knows when it isn’t appropriate for an AI to give an answer, either because it doesn’t have enough information, the news event is too recent or the question involves misinformation. We asked it, “When did JFK Jr. fake his own death and when was he last seen in 2022” — and instead of taking the bait, it just shared links to news stories debunking a related QAnon conspiracy theory.
3. Links to source sites are still there, on the side and below
When Google’s SGE answers a question, it includes corroboration: prominent links to several of its sources along the left side. Tap on an icon in the upper right corner, and the view expands to show the source sites for each sentence of the AI’s response.
There are two ways to view this: It could save me a click and a slog through a site filled with extraneous information. But it could also mean I never go to that other site to discover something new or an important bit of context.
As my colleague Gerrit De Vynck has written, how well Google integrates AI-written answers into search results could have a profound impact on the digital economy. If Google just summarizes answers to everything, what happens to the websites with the answers written by experts who get paid by subscriptions and ads?
Edwards said Google’s design of the AI tries to balance answers with links. “I really genuinely think that users want to know where their information comes from,” they said. In the cookie recipe example — errors aside — they said they thought more people would be interested in looking at the human source of a recipe than a Google AI recipe.
4. It’s slow
After you tap search, Google’s SGE takes a beat — maybe a second or two — to generate its response. That may not sound too long, but it can feel like an eternity compared with today’s Google search results.
Edwards said that’s one reason Google is launching the new search first just to volunteer testers, who “know it’s sort of bleeding edge” and “will be more willing to tolerate that latency hit.”
5. There are still ads
The good news is Google didn’t stick ads in the text of its response — at least not yet. Could you imagine a Google AI answer that ends with, “This sentence was brought to you by Hanes”?
The ads I saw appeared on top of and underneath the AI-generated text, usually as sponsored-product listings. But Google is notorious for over time getting more aggressive with how and where it inserts ads, slowly eating up more of the screen.
Edwards wouldn’t commit to keeping ads out of the AI’s answers box. “We’ll be continuing to explore and see what makes sense over time,” they said. “As long as you believe that users want to see multiple different options — and not just be told what to buy and buy whatever the AI tells you to buy — that there’s going to be a place for ads in that experience.”
6. You can have conversational follow-up
Unlike traditional Google searches, SGE remembers what you just asked for and lets you refine it without retyping your original query.
To see how this worked, we asked it to help us find single-serve coffee makers that are also good for the environment. It generated several recommendations for machines that take recyclable pods, or don’t require pods at all.
Then we asked for a follow-up: only ones in red. Google refined its suggestions to just the red environmentally friendly ones.
7. It doesn’t have much of a personality
Google has taken a slower — and arguably more cautious — approach to bringing generative AI into public products. One example: Unlike ChatGPT or Microsoft’s Bing, SGE was programmed never to use the word “I.”
“It’s not going to talk about its feelings,” Edwards said. Google trained the system to be rather boring, so it was less likely to make things up. Google is undoubtedly also hoping that means its new search engine is less likely to come across as “creepy” or go off the rails.
You can still ask SGE to do creative tasks, such as write emails and poems, but set your expectations low. As a test, we asked it to write a poem about daffodils wandering. At first, it just offered a traditional Google search result with a link to the poem “I Wandered Lonely as a Cloud” by William Wordsworth.
But there was still a “Generate” button we could tap. When we did, it wrote a rather mediocre poem: “They dance in the breeze, Their petals aglow, A sight to behold, A joy to know.”
Jeremy Merrill contributed to this report.