“We’re in a diversity crisis”: cofounder of Black in AI on what’s poisoning algorithms in our lives

Timnit Gebru looks around the AI world and sees almost no one who looks like her. That’s a problem for all of us.

Artificial intelligence is an increasingly seamless part of our everyday lives, present in everything from web searches to social media to home assistants like Alexa. But what do we do if this massively important technology is unintentionally, but fundamentally, biased? And what do we do if this massively important field includes almost no black researchers? Timnit Gebru is tackling these questions as part of Microsoft’s Fairness, Accountability, Transparency, and Ethics in AI group, which she joined last summer. She also cofounded the Black in AI event at the Neural Information Processing Systems (NIPS) conference in 2017 and was on the steering committee for the first Conference on Fairness, Accountability, and Transparency (FAT*) in February. She spoke with MIT Technology Review about how bias gets into AI systems and how diversity can counteract it.

How does the lack of diversity distort artificial intelligence and specifically computer vision?

I can talk about this for a whole year. There is a bias to what kinds of problems we think are important, what kinds of research we think are important, and where we think AI should go. If we don’t have diversity in our set of researchers, we are not going to address problems that are faced by the majority of people in the world. When problems don’t affect us, we don’t think they’re that important, and we might not even know what these problems are, because we’re not interacting with the people who are experiencing them.

Are there ways to counteract bias in systems?

The reason diversity is really important in AI, not just in data sets but also in researchers, is that you need people who just have this social sense of how things are. We are in a diversity crisis for AI. In addition to having technical conversations, conversations about law, conversations about ethics, we need to have conversations about diversity in AI. We need all sorts of diversity in AI. And this needs to be treated as something that’s extremely urgent.

From a technical standpoint, there are many different kinds of approaches. One is to diversify your data set and to have many different annotations of your data set, like race and gender and age. Once you train a model, you can test it out and see how well it does by all these different subgroups. But even after you do this, you are bound to have some sort of bias in your data set. You cannot have a data set that perfectly samples the whole world.
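
To make the subgroup-testing idea concrete, here is a minimal sketch in Python of the kind of per-group evaluation Gebru describes. The annotation fields, data layout, and function name are hypothetical, not drawn from any particular benchmark or from her work.

```python
# Minimal sketch of per-subgroup evaluation. One aggregate accuracy
# number can hide large gaps between groups, so we report the metric
# separately for each value of an annotation attribute.
from collections import defaultdict

def accuracy_by_subgroup(examples, predictions, attribute):
    """examples: dicts with a 'label' key plus annotation fields
    (e.g. 'gender', 'age_bracket'); predictions: aligned labels."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for example, prediction in zip(examples, predictions):
        group = example[attribute]
        total[group] += 1
        correct[group] += int(prediction == example["label"])
    return {group: correct[group] / total[group] for group in total}

# Usage: flag any subgroup whose accuracy lags the overall figure.
# for attr in ("gender", "age_bracket", "skin_type"):
#     print(attr, accuracy_by_subgroup(test_set, model_predictions, attr))
```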

Something I’m really passionate about and I’m working on right now is to figure out how to encourage companies to give more information to users or even researchers. They should have recommended usage, what the pitfalls are, how biased the data set is, etc. So that when I’m a startup and I’m just taking your off-the-shelf data set or off-the-shelf model and incorporating it into whatever I’m doing, at least I have some knowledge of what kinds of pitfalls there may be. Right now we’re in a place almost like the Wild West, where we don’t really have many standards [about] where we put out data sets.
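
One way to picture the kind of disclosure Gebru is advocating is a machine-readable datasheet that ships alongside a dataset or model. The sketch below is purely illustrative; every field name and number is invented, and it does not represent any existing standard.

```python
# Illustrative sketch of a "datasheet" an off-the-shelf dataset could
# ship with: provenance, recommended usage, and known skews, disclosed
# before anyone builds on top of it. All values here are invented.
FACE_DATASET_SHEET = {
    "name": "example-faces-v1",  # hypothetical dataset
    "collection": "scraped from public web pages, 2016-2017",
    "recommended_usage": ["research on face detection"],
    "not_recommended": ["identity verification", "surveillance"],
    "known_skews": {
        "gender": {"male": 0.72, "female": 0.28},
        "region": {"north_america": 0.61, "africa": 0.04},
    },
    "subgroup_results": "run accuracy_by_subgroup() before deployment",
}
```

A startup pulling this dataset off the shelf could then check the usage and skew fields against its own use case before shipping anything.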

Read More Here

Article Credit: Technology Review

Go to Source

Should AI bots lie? Hard truths about artificial intelligence

AI will revolutionize the world, or so sayeth Silicon Valley. But there are some potholes on the road to AI nirvana — starting with the people AI is supposed to help. Think Skynet. Here’s research from the frontlines of artificial intelligence.

People working in teams do things — such as telling white lies — that can help the team be successful. We accept that, usually, when a person does it. But what if an AI bot is telling the lie, or being told a lie?

More importantly, if we allow bots to tell people lies, even white lies, how will that affect trust? And if we do give AI bots permission to lie to people, how do we know that their lies are helpful to people instead of the bot?

Computer scientists Tathagata Chakraborti and Subbarao Kambhampati of Arizona State University discuss effective collaboration between humans and AI in a recent paper, Algorithms for the Greater Good!. They point out that it’s not enough to make the AI smart. AI devs have to make sure the AI bot works well with human intelligence, in all its wild variety, including different cultural norms, if we are to avoid serious problems.

They frame the issue this way:

Effective collaboration between humans and AI-based systems requires effective modeling of the human in the loop…. However, these models [of the human] can also open up pathways for manipulating and exploiting the human … when the intent or values of the AI and the human are not aligned or when they have an asymmetrical relationship with respect to knowledge or computation power.

If IBM, Intel, and Nvidia have their way, there will be an ever-growing “asymmetrical relationship with respect to knowledge or computation power.” A bot might have a couple of thousand drones surveying several square kilometers, or an exabyte of relevant history and context. Or both.

I, AI. YOU, MEAT PUPPET.

The researchers designed a thought experiment to explore human-human and human-AI interactions in an urban search and rescue scenario: searching a floor of an earthquake-damaged building. They enlisted 147 people on Mechanical Turk to survey how human reactions change when dealing with humans versus AI.

The scenarios involved different kinds of influence, including belief shaping, model differences, and stigmergic collaboration.

These aren’t theoretical issues. For example, a doctor’s Hippocratic oath includes a promise to conceal “… most things from the patient while you are attending to him.” This is done for the good of the patient, but what if it is a medical AI that is concealing information from a patient, or from a doctor?

There is a lot to the paper, but the issue I found most concerning is that many people are OK with lying to an AI and, likewise, OK with being lied to by an AI.

And conversely, many aren’t. How is an AI developer supposed to model THAT?

Read More Here

Article Credit: ZDNet

Go to Source

Fake news is still a problem. Is AI the solution?

Human fact-checkers can’t keep up with the flood of fraudulent stories, images, and videos.

Fake news is fueled in part by advances in technology — from bots that automatically fabricate headlines and entire stories, to computer software that synthesizes Donald Trump’s voice and makes him read tweets, to a new video-editing app that makes it possible to create authentic-looking videos in which one person’s face is stitched onto another person’s body.

But technology, in the form of artificial intelligence, may also be the key to solving the fake news problem — which has rocked the American political system and led some to doubt the veracity even of reports from long-trusted media outlets.

Experts say AI systems would help fill the gaps left by Snopes, Truth or Fiction, and other online fact-checking outlets, whose human fact-checkers lack the bandwidth to evaluate every article that appears online. These systems could also work with various fake news alert plugins available from Google’s web store, such as the browser extension This is Fake, which uses a red banner to flag debunked news stories on your Facebook newsfeed.

“All of the current systems for tracking fake news are manual, and this is something we need to change, as the earlier you can highlight that a story is fake, the easier it is to prevent it going viral,” says Delip Rao, founder of the San Francisco-based AI research company Joostware and organizer of the Fake News Challenge, a competition set up within the AI community to foster development of tools that can reliably spot fake content.
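
For a sense of what that looks like in practice, the Fake News Challenge’s first task framed the problem as stance detection: given a headline and an article body, predict whether the body agrees with, disagrees with, discusses, or is unrelated to the headline. The sketch below is a crude baseline on toy stand-in data, not the actual FNC-1 corpus or any competitor’s system.

```python
# Toy stance-detection baseline: TF-IDF features over a concatenated
# headline+body string, classified with a linear model. The two example
# pairs are invented stand-ins for the real FNC-1 training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pairs = [
    ("Robot dog saves child", "A robotic dog pulled a child from a well today..."),
    ("Robot dog saves child", "Officials deny that any such rescue took place..."),
]
labels = ["agree", "disagree"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([headline + " " + body for headline, body in pairs], labels)

# Predict the stance of a new body toward the same headline.
print(model.predict(["Robot dog saves child Police say no rescue occurred."]))
```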

FIGHTING THE FAKERS

At last month’s World Economic Forum in Davos, Switzerland, Google and Facebook announced plans to develop AI systems that would notify users about dubious content. Google has floated the idea of a “misinformation detector” browser extension that would alert users if they land on a link deemed untrustworthy.

But while these plans have yet to be put into action, an Israeli startup company called AdVerif.ai has already begun fighting back against the fakers.

Read More Here

Article Credit: NBC News

Go to Source

Slow to gain traction, AI apps on the verge of explosion

From chatbots (“Can I help you?”) to killer bots (“I’ll be back.”), artificial intelligence runs the gamut of applications and emotions like no other technology. It’s been nearly 70 years since the idea of AI first entered the public consciousness, yet only recently has it made serious inroads into business and consumer markets, mainly riding the coattails of its offshoot, machine learning.

AI’s other offspring — deep learning, cognitive computing, image recognition and natural language processing — show plenty of promise but have barely left the womb. Yet market research and industry foot soldiers tell us AI is on the cusp of explosive growth in practically every major industry.

Along those lines, the February issue of Business Information opens with our editor’s note, which offers a good dose of common sense, advising businesses not to throw caution to the whirlwind of hyperbole surrounding AI. Companies anxious to apply AI, particularly machine learning, to their operations for fear they’ll fall behind the competition must first do their homework and separate fact from fiction, all the while keeping in mind that implementing AI apps is difficult and requires time, expense and the right kind of data.

Our cover story takes the issue of good data one step further and examines the very real dangers that bias in machine learning data sets can create. Data scientists have their work cut out for them in identifying and pinpointing bias, especially since nearly all data inherently contains some. In another feature, we look at today’s AI tools, which, apart from some niche AI apps, are not yet very useful but carry a whole lot of promise as a soon-to-be transformative force in business operations.

Also in this issue: a major components manufacturer uses AI apps to transform its assembly-line process; cognitive computing helps improve doctors’ bedside manner; the benefits of AI and machine learning entice healthcare organizations to move to the cloud; and humankind’s range of reactions to AI’s unlimited potential parallels that of the atomic age.

Read More Here

Article Credit: TechTarget

Go to Source

How ‘Big Data’ Might Help Predict Which Children Are Most At Risk For Abuse

Currently, 3 million kids are investigated for maltreatment every year — 700,000 of those cases are substantiated

Each day, social workers must decide whether or not the children they visit should be removed from their parents’ homes. It’s a decision that changes the courses of those kids’ lives.

During a recent episode of KERA’s “Think,” Naomi Schaefer Riley, a visiting fellow at the American Enterprise Institute, talked about how we can better harness statistical information to help make these decisions.

The idea of using “big data” to analyze risk isn’t a brand new concept, but better algorithms are allowing it to be used in new ways.

On the “overwhelming” number of kids in the system

Currently, 3 million kids are investigated for maltreatment every year — 700,000 of those cases are substantiated.

Riley says about one in three children younger than 18 has had some contact with the child welfare system.

“I think we can safely say that a lot of those cases are obviously not substantiated,” she says. “It means that a lot of kids are having contact with child welfare when there’s absolutely no need for it.”

Workers in the American child welfare system are being completely overwhelmed, she says.

“There’s basically a fire hose of reports being thrown at them and the question is: How can we really expect them reasonably to sort through them?” she says.

On using predictive analytics to determine abuse risk

Data on families is available from various institutions, like schools and the welfare and health care systems. In the past few years, these systems have started talking to each other.

“Suddenly, people who were looking at child welfare could actually access data from schools and could actually access data from medical records or data from welfare benefits,” she said.

Riley says this data can then be used in an algorithm to generate a “score” for a family, measuring the likelihood that a child would be subject to abuse.

“Once a score is spit out, it’s not ‘Oh, they scored a 10; let’s go remove the child from the home.’ Rather it’s actually a way of determining for a case worker how urgent this case is to be looked at,” she said.
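
A minimal sketch of that triage idea might look like the following; the feature names and weights are purely invented, not taken from any deployed system. The point is that the score orders the caseworker’s queue rather than making the removal decision.

```python
# Toy risk-scoring for triage: rank referrals for review, most urgent
# first. Features and weights are invented for illustration only.
referrals = [
    {"id": "A-101", "prior_referrals": 4, "household_instability": 0.8},
    {"id": "A-102", "prior_referrals": 0, "household_instability": 0.1},
    {"id": "A-103", "prior_referrals": 2, "household_instability": 0.5},
]

def urgency_score(referral):
    # Map a crude linear combination onto a 1-10 scale, echoing the
    # "scored a 10" language above; a real model would be trained
    # and validated, not hand-weighted.
    raw = 0.15 * referral["prior_referrals"] + 0.4 * referral["household_instability"]
    return round(1 + 9 * min(raw, 1.0), 1)

# A human still makes the call; the score only sets review order.
for referral in sorted(referrals, key=urgency_score, reverse=True):
    print(referral["id"], urgency_score(referral))
```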

On the concern this data would lead to over-reporting in minority communities

Poverty correlates with child abuse and neglect, and minority neighborhoods tend to be poorer. The risk of abuse increases with economic instability in the home, Riley says. Parents and children tend to have less stable relationships when they’re stressed over money.

Read More Here

Article Credit: Houston Public Media

Go to Source

Smart speakers are now the fastest-growing consumer technology ahead of AR, VR, and wearables

More than 56 million smart speakers will be shipped in 2018. From hardware adoption, the market will move towards services and consumer engagement.

Consumer tech has been bitten by a new bug — smart speakers. These are said to be growing in adoption faster than all other recent technologies, viz. AR, VR, and even wearables. Tech analysts say 2017 was a “banner year” for smart speakers, with record hardware sales driven by Amazon and Google.

But 2018 is going to be bigger and better. More than 56 million smart speakers are estimated to be shipped this year, according to a Canalys forecast. That amounts to a whopping 10X growth since 2016, when only five to six million smart speakers were shipped.

While last year was about driving orders and installations, 2018 will see increasing consumer engagement with the devices. Smart speakers as a segment will move beyond hardware as manufacturers look to monetise the installed base. Amazon’s Echo line of speakers and Google’s Home products will take the lead again.

Consumer acceptance of smart speakers has been rapid, especially in the US. However, future growth is slated to come from markets like Western Europe and Asia (barring China, which has banned Amazon and Google) as well.

Rising broadband penetration, coupled with declining smart speaker prices and the increasing integration of AI technologies, is accelerating adoption the world over.

The forecast states, “Vendors have begun offering successful upgrades to their latest models, and a key element driving this stickiness are the smart home partnerships. Alexa’s multiple smart home integrations, Google’s partnership with Nest and Apple’s HomeKit initiatives will continue to excite consumers of the smart speaker and fuel sales in 2018.”

Main players

Amazon, of course, commands the lion’s share of smart speaker sales. In 2017, it held over 70 percent of the market. Google Home was a distant second with a 24 percent share. Amazon is expected to generate $10 billion in additional revenues from the sale of Echo devices by 2020. Jeff Bezos announced in a recent earnings call that Amazon would “double down” on investments in Alexa-powered devices.

Read More Here

Article Credit: YourStory

Go to Source

Bill Gates: It’s ‘scary to me’ that technology can empower small groups to do great harm

Microsoft co-founder Bill Gates revealed what scares him about the advance of technology in an interview with Axios.

“There’s always the question how much technology is empowering a small group of people to cause damage,” Gates told the news site. “A small group can have an impact — in the case of nuclear [weapons], on millions; and in the case of bio[terror], on billions. That is scary to me.”

He also issued a warning to tech companies, as the threat of government regulation hangs over Silicon Valley. “The companies need to be careful that they’re not … advocating things that would prevent government from being able to, under appropriate review, perform the type of functions that we’ve come to count on,” he told Axios.

Tech giants Facebook, Apple and Alphabet‘s Google have come under growing criticism in recent months for what some see as a failure to self-regulate and mitigate the negative effects of technology on society.

Facebook and Google have struggled to keep inappropriate or hateful content off their sites, and Apple has clashed with law enforcement agencies over granting access to the personal data of suspected criminals stored on iPhones.

Gates attributed that to the companies’ “enthusiasm about making financial transactions anonymous and invisible, and their view that even a clear mass-murdering criminal’s communication should never be available to the government.”

Read More Here

Article Credit: CNBC

Go to Source

New initiatives to boost technology incubators

The government has decided to set up 15 new biotechnology incubators and another 15 new technology business incubators during the coming financial year.

The new initiatives spelt out in the budget are designed to help translate new technologies developed by biotech and other companies into products of use to society. Addressing a press conference, Minister for Science and Technology and Earth Sciences Dr. Harsh Vardhan noted that, apart from the incubators, the Biotechnology Industry Research Assistance Council (BIRAC) will support the setting up of 3,000 additional start-ups in different parts of the country.

He said the budget allocation for science and technology has been increasing since 2014, when the present government took office. “The allocation for Department of Science and Technology (DST) for the last five years has witnessed a whopping 90 percent increase over preceding five years (2009-10 to 2013-14), followed by a 65 percent increase for Department of Biotechnology (DBT), 43 percent for Council of Scientific and Industrial Research (CSIR) and 26 percent for the Ministry of Earth Sciences (MoES)”.

Asked about reports of a fund shortage faced by CSIR last year, he agreed that there had been an issue because the Council had to implement the Seventh Pay Commission recommendations for its employees. But this year, he said, that problem will not arise.

He noted that CSIR, as well as DST, DBT and MoES, have been allocated more funds this year. The increase is eight percent for DST, 3.56 percent for CSIR, 6.7 percent for DBT, and 12.7 percent for MoES.

He noted that among other new initiatives, the India Meteorological Department under MoES would provide agrometeorological advisories to 50 million farmers by the end of the year, up from 24 million at present. In addition, a new Mission for Cyber-Physical-Systems will be launched.

Read More Here

Article Credit: FirstPost

Go to Source

Walmart goes to the cloud to close gap with Amazon

One of Walmart’s best chances at taking on Amazon.com in e-commerce lies with six giant server farms, each larger than ten football fields.

These facilities, which cost Walmart millions of dollars and took nearly five years to build, are starting to pay off. The retailer’s online sales have been on a tear for the last three consecutive quarters, far outpacing wider industry growth levels.

Powering that rise are thousands of proprietary servers that enable the company to crunch almost limitless swathes of customer data in-house.

Most retailers rent the computing capacity they need to store and manage such information. But Walmart’s decision to build its own internal cloud network shows its determination to grab a bigger slice of online shopping, in part by imitating Amazon’s use of cloud-powered big data to drive digital sales.

The effort is helping Walmart to stay competitive with Amazon on pricing and to tightly control key functions such as inventory. And it is allowing the company to target shoppers with more customized offers and improved services, two top executives told Reuters in interviews at Walmart’s San Bruno and Sunnyvale campuses in California.

“It has made a big difference to how fast we can grow our e-commerce business,” said Tim Kimmet, head of cloud operations for Walmart.

He said Walmart, for example, is using cloud data to stock items frequently ordered by customers via voice shopping devices such as Google Home.

The network is helping the retailer improve its in-store operations as well.

Using data gleaned from millions of transactions, the company has sped up by 60 percent the process by which customers return online purchases to their local stores. And Walmart can adjust prices at its physical locations almost instantly across entire regions.

“We are now able to execute change faster,” Jeremy King, Walmart’s chief technology officer, told Reuters. He added that Walmart can now make over 170,000 monthly changes to the software that supports its website, compared with fewer than 100 changes previously.

To be sure, Walmart, the world’s largest brick-and-mortar retailer, holds just a 3.6 percent share of the U.S. e-commerce market compared to Amazon’s 43.5 percent, according to digital research firm eMarketer.

Still, Walmart’s cloud effort is significant at a time when U.S. retail is undergoing immense disruption, and data-based decision making has become more important than ever to understand how shoppers make purchases.

Read More Here

Article Credit: CNBC

Go to Source

Cybersecurity is greatest concern at Senate threats hearing

At the Senate Intelligence Committee’s annual “Worldwide Threats” hearing, the top US intelligence agencies put technology front and center.

For the top intelligence agencies in the US, technology has pushed aside terrorism as a top national security threat.

The leaders of six of those agencies, including the CIA, the NSA and the FBI, testified before the Senate Intelligence Committee on Tuesday, during its annual “Worldwide Threats” hearing. They discussed concerns ranging from terrorist attacks to nuclear strikes, but a major portion of the hearing was dedicated to discussing threats coming from technology.

Director of National Intelligence Dan Coats said in his opening statement that cybersecurity is his “greatest concern” and “top priority,” putting it ahead of threats like weapons of mass destruction and terrorism.

“From US businesses to the federal government to state and local governments, the United States is threatened by cyberattacks every day,” Coats said.

Those worries aren’t new. In December, President Donald Trump issued a national security strategy document that described cybersecurity as a top priority, citing threats including hackers from criminal enterprises and from places like Russia, China and Iran. That declaration came at the end of a long year awash in online security issues, from the WannaCry ransomware attack to probes into the hacking of critical infrastructure to revelations of Russian misinformation campaigns waged via social media.

In his opening statement, Sen. Mark Warner, the committee’s vice chairman, highlighted his concerns about Russians spreading propaganda through Facebook, Google and Twitter, an issue the Democrat from Virginia has pressed the Silicon Valley tech titans on before.

Warner called out Russian bots and trolls and their potential to affect future elections.

“This is a dangerous trend,” he said. “This campaign of innuendo and misinformation should alarm us all, Republican and Democrat alike.”

Coats also described threats from foreign propaganda online, pointing out that it’s a low-cost and low-risk avenue for attackers. He told the committee that Russian operatives viewed the propaganda campaign during the 2016 election as a success, and warned it would continue.

“There is no doubt that Russia sees the 2018 elections as a target,” Coats said.

The fact that Coats started the discussion with cybersecurity, Warner said, was “very telling in terms of how we view worldwide threats.”

Read More Here

Article Credit: CNET

Go to Source