AI May Be Catching up With Human Reasoning

Matching wits against algorithms

By Sascha Brodsky, Senior Tech Reporter. Sascha Brodsky is a freelance journalist based in New York City. His writing has appeared in The Atlantic, the Guardian, the Los Angeles Times, and many other publications.

Published on April 12, 2022, at 10:13 AM EDT

Fact checked by Jerri Ledford. Jerri L. Ledford has been writing, editing, and fact-checking tech stories since 1994. Her work has appeared in Computerworld, PC Magazine, Information Today, and many others.

Researchers have created techniques that let users rank the results of a machine-learning model's behavior. Experts say the method shows that machines are catching up to humans' thinking abilities. Advances in AI could speed up the development of computers' ability to understand language and revolutionize the way AI and humans interact.
KanawatTH / Getty Images

A new technique that measures the reasoning power of artificial intelligence (AI) shows that machines are catching up to humans in their abilities to think, experts say. Researchers at MIT and IBM Research have created a method that enables a user to rank the results of a machine-learning model's behavior. Their technique, called Shared Interest, incorporates metrics that compare how well a model's thinking matches people's.
"Today, AI is capable of reaching (and, in some cases, exceeding) human performance in specific tasks, including image recognition and language understanding," Pieter Buteneers, director of engineering in machine learning and AI at the communications company Sinch, told Lifewire in an email interview. "With natural language processing (NLP), AI systems can interpret, write and speak languages as well as humans, and the AI can even adjust its dialect and tone to align with its human peers." 
 <h2> Artificial Smarts </h2> AI often produces results without explaining why those decisions are correct. And tools that help experts make sense of a model’s reasoning often only provide insights, only one example at a time.
"Today, AI is capable of reaching (and, in some cases, exceeding) human performance in specific tasks, including image recognition and language understanding," Pieter Buteneers, director of engineering in machine learning and AI at the communications company Sinch, told Lifewire in an email interview. "With natural language processing (NLP), AI systems can interpret, write and speak languages as well as humans, and the AI can even adjust its dialect and tone to align with its human peers."

Artificial Smarts

AI often produces results without explaining why those decisions are correct. And tools that help experts make sense of a model’s reasoning often only provide insights, only one example at a time.
AI is usually trained using millions of data inputs, making it hard for a human to evaluate enough decisions to identify patterns. In a recent paper, the researchers said that Shared Interest could help a user uncover trends in a model's decision-making, and these insights could allow the user to decide whether a model is ready to be deployed.

"In developing Shared Interest, our goal is to be able to scale up this analysis process so that you could understand on a more global level what your model's behavior is," Angie Boggust, a co-author of the paper, said in a news release. Shared Interest builds on techniques, known as saliency methods, that show how a machine-learning model made a particular decision.
If the model is classifying images, saliency methods highlight the areas of an image that are important to the model when it makes its decision. Shared Interest works by comparing those saliency results to human-generated annotations.
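To make that comparison concrete, here is a minimal sketch in Python of one way such an agreement check could work, assuming the saliency map has already been produced by whatever saliency method the model uses: threshold it into a binary mask of the pixels the model leaned on most, then measure how much that mask overlaps a human-drawn annotation using intersection-over-union. The function names, the threshold, and the toy data are illustrative, not taken from the Shared Interest paper.

```python
# Minimal sketch of comparing model saliency to a human annotation.
# The saliency map would come from any standard saliency method (e.g.,
# input gradients); here it is just a NumPy array for illustration.
import numpy as np

def saliency_to_mask(saliency: np.ndarray, quantile: float = 0.9) -> np.ndarray:
    """Keep the most salient pixels (top 10% by default) as a binary mask."""
    threshold = np.quantile(saliency, quantile)
    return saliency >= threshold

def agreement_score(saliency: np.ndarray, human_mask: np.ndarray) -> float:
    """Intersection-over-union between the model's mask and the human's."""
    model_mask = saliency_to_mask(saliency)
    intersection = np.logical_and(model_mask, human_mask).sum()
    union = np.logical_or(model_mask, human_mask).sum()
    return float(intersection / union) if union else 0.0

# Toy 4x4 "image": the model's salient region partly overlaps the human's.
saliency = np.array([[0.9, 0.8, 0.1, 0.0],
                     [0.7, 0.6, 0.1, 0.0],
                     [0.1, 0.1, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])
human_mask = np.zeros((4, 4), dtype=bool)
human_mask[:2, :2] = True   # the human marked the top-left 2x2 patch

print(agreement_score(saliency, human_mask))  # 0.5: partial agreement
```

Sorting a dataset by scores like this is one way to see at a glance where a model and its human annotators disagree, which is the kind of global view the researchers describe.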
Researchers used Shared Interest to help a dermatologist determine whether he should trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest enabled the dermatologist to quickly see examples of the model's correct and incorrect predictions. The dermatologist decided he could not trust the model because it made too many predictions based on image artifacts rather than actual lesions. "The value here is that using Shared Interest, we are able to see these patterns emerge in our model's behavior. In about half an hour, the dermatologist was able to decide whether or not to trust the model and whether or not to deploy it," Boggust said.
Measuring Progress

The work by the MIT researchers could be a significant step forward in AI's progress toward human-level intelligence, Ben Hagag, head of research at Darrow, a company that uses machine learning algorithms, told Lifewire in an email interview.

"The reasoning behind a model's decision is important to both the machine learning researcher and the decision-maker," Hagag said. "The former wants to understand how good the model is and how it can be improved, whereas the latter wants to develop a sense of confidence in the model, so they need to understand why that output was predicted."

But Hagag cautioned that the MIT research rests on the assumption that we understand, or can annotate, human understanding and human reasoning. "However, there is a possibility that this might not be accurate, so more work on understanding human decision-making is necessary," Hagag added.
metamorworks / Getty Images

Advances in AI could speed up the development of computers' ability to understand language and revolutionize the way AI and humans interact, Buteneers said. Chatbots can understand hundreds of languages at a time, and AI assistants can scan bodies of text for answers to questions or irregularities.
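As a small illustration of the kind of text scanning Buteneers describes, a pretrained question-answering model from the open-source Hugging Face transformers library (not something discussed in the article) can pull an answer out of a passage:

```python
# Illustrative only: scan a body of text for the answer to a question.
# Assumes the Hugging Face `transformers` library (and a backend such as
# PyTorch) is installed; the passage and question are made up for the demo.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default pretrained QA model

passage = (
    "Researchers at MIT and IBM Research created a method called Shared "
    "Interest that compares a model's reasoning with human annotations."
)
result = qa(question="Who created Shared Interest?", context=passage)

print(result["answer"])  # the text span the model thinks answers the question
print(result["score"])   # the model's confidence in that span
```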
“Some algorithms can even identify when messages are fraudulent, which can help businesses and consumers alike to weed out spam messages,” Buteneers added. But, he said, AI still makes some mistakes that humans never would. “While some worry that AI will replace human jobs, the reality is we’ll always need people working alongside AI bots to help keep them in check and keep these mistakes at bay while maintaining a human touch in business,” he added.