One of the frustrations with machine learning, particularly in the area of image recognition, is that neural nets sometimes get things completely, laughably, inexplicably wrong. We wonder how an AI could be shown an image of an unmistakably dog-like dog and conclude that it’s a pineapple. But new research from Johns Hopkins University, published in Nature Communications, demonstrates that there is a logic to these errors — one humans can intuitively understand, if pressed. Researchers Zhenglong Zhou and Chaz Firestone conducted a series of experiments in which they presented human participants with adversarial image sets — images containing tiny perturbations designed to deceive a machine learning model — and asked them to predict the labels certain convolutional neural networks (CNNs) had applied to the images. In some cases, the CNNs had overcome the adversarial images and applied the correct labels, but in other instances they had whiffed. The researchers wanted to … [Read more...] about Humans can predict how machines (mis)classify adversarial images
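For readers curious how such adversarial perturbations are produced in the first place, here is a minimal sketch using the Fast Gradient Sign Method (FGSM), one common technique. The model choice, input tensor, and epsilon value below are illustrative assumptions, not details drawn from the Zhou and Firestone paper.

```python
import torch
import torchvision.models as models

# Hypothetical setup: a pretrained ImageNet classifier (downloads weights).
model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # The per-pixel change is tiny, but it can flip the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: `x` stands in for a (1, 3, 224, 224) image tensor in [0, 1] and
# `y` for its correct ImageNet class index; both are placeholders here.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_attack(x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```

The point of the sign trick is that the perturbation stays bounded per pixel (at most epsilon), which is why the altered image looks unchanged to a human while the model's prediction can shift entirely.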
Generative AI models have a propensity for learning complex data distributions, which is why they’re great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply. The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper published on the preprint server arXiv.org (“High-Fidelity Image Generation With Fewer Labels”), they describe a “semantic extractor” that can pull features out of training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. Together, they say, these self- and semi-supervised techniques can outperform state-of-the-art methods on popular benchmarks like ImageNet. “In a nutshell, instead of providing hand-annotated ground truth labels for real images to the … [Read more...] about Researchers are training image-generating AI with fewer labels
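To make the label-inference idea concrete, here is a hedged sketch of generic pseudo-labeling: train a classifier on the small labeled subset, then infer labels for the rest of the training set, keeping only confident predictions. The feature matrix, classifier choice, and confidence threshold are assumptions for illustration; this does not reproduce the paper's actual semantic-extractor architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def infer_labels(features, labels, labeled_idx, confidence=0.9):
    """Train on the labeled subset, then pseudo-label the remaining examples."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features[labeled_idx], labels[labeled_idx])
    probs = clf.predict_proba(features)
    pseudo = probs.argmax(axis=1)
    # Keep only confident predictions; low-confidence examples stay unlabeled.
    keep = probs.max(axis=1) >= confidence
    return pseudo, keep

# Usage with synthetic data: 10,000 feature vectors (imagine they came from
# a self-supervised extractor), of which only 500 carry ground-truth labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 64))
labels = rng.integers(0, 10, size=10_000)
labeled_idx = rng.choice(10_000, size=500, replace=False)
pseudo, keep = infer_labels(features, labels, labeled_idx)
print(f"pseudo-labeled {keep.sum()} of {len(features)} examples")
```

The appeal of this family of techniques is exactly what the teaser describes: the expensive human annotation budget covers only a small subset, and the model's own structure fills in the rest.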
Bias is a well-established problem in artificial intelligence (AI): models trained on unrepresentative datasets tend to be anything but impartial. It’s a tougher challenge to solve than you might think, particularly in image classification tasks, where racial, societal, and ethnic prejudices frequently rear their ugly heads. In a crowdsourced attempt to combat the problem, Google in September partnered with the NeurIPS competition track to launch the Inclusive Images Competition, which challenged teams to use Open Images — a publicly available dataset of roughly 9 million labeled images sampled largely from North America and Europe — to train an AI system evaluated on photos collected from regions around the world. It’s hosted on Kaggle, Google’s data science and machine learning community portal. Tulsee Doshi, a product manager at Google AI, gave a progress update on Monday morning during a presentation on algorithmic fairness. “[Image classification] performance … has [been] improving … [Read more...] about Google’s Inclusive Images Competition spurs development of less biased image classification AI
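One simple way to surface the kind of geographic skew the competition targets is to report accuracy per region rather than a single global number. The sketch below does exactly that; the region labels, predictions, and data are hypothetical placeholders, not anything from the competition itself.

```python
from collections import defaultdict

def accuracy_by_region(predictions, labels, regions):
    """Return classification accuracy broken out by region."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, label, region in zip(predictions, labels, regions):
        totals[region] += 1
        hits[region] += int(pred == label)
    return {r: hits[r] / totals[r] for r in totals}

# Usage with toy data: a model that looks fine overall can still lag badly
# on regions underrepresented in its training set.
preds   = ["dog", "dog", "cat", "cat", "dog", "cat"]
labels  = ["dog", "dog", "cat", "dog", "cat", "cat"]
regions = ["EU",  "EU",  "EU",  "SA",  "SA",  "SA"]
print(accuracy_by_region(preds, labels, regions))  # EU: 1.0, SA: ~0.33
```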
By KEVIN ROOSE, APRIL 6, 2018. Each week, Kevin Roose, technology columnist at The New York Times, discusses developments in the tech industry, offering analysis and maybe a joke or two. Want this newsletter in your inbox? Sign up here. It’s hard to imagine now, but at one point, long ago, Facebook did not monopolize the entire tech news cycle — a heady and innocent era when you could read an entire day’s news without encountering the words “Cambridge Analytica” or “third-party developers.” I confess that, like many of you, I have been obsessed with the fallout from Facebook’s latest privacy scandal, to the point that I had a stress dream that I overslept and missed covering Mark Zuckerberg’s testimony on Capitol Hill next week. (Related: I need to … [Read more...] about Kevin’s Week in Tech: Extra! Extra! News Beyond Facebook!
In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project would display the promises and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay became a racist, misogynist, Holocaust-denying AI, debunking—once again—the myth of algorithmic neutrality. For years, we’ve thought that artificial intelligence doesn’t suffer from the prejudices and biases of its human creators because it’s driven by pure, hard, mathematical logic. However, as Tay and several other stories have shown, AI can manifest the same biases as humans, and in some cases it can be even worse. The phenomenon, known as “algorithmic bias,” is rooted in the way AI algorithms work and is becoming more problematic as software grows more prominent in every decision we make. The roots of … [Read more...] about Stopping racist AI is as difficult as stopping racist people
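To illustrate the mechanism in miniature: a model trained on historically skewed labels will reproduce that skew going forward, even when the sensitive attribute carries no legitimate information. Everything in the sketch below is synthetic and purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
group = rng.integers(0, 2, size=n)  # a synthetic protected attribute
skill = rng.normal(size=n)          # the feature that *should* decide outcomes
# Historical labels encode a bias: members of group 1 needed a higher skill
# level to receive the same positive outcome.
label = (skill - 1.0 * group > 0).astype(int)

clf = LogisticRegression().fit(np.column_stack([skill, group]), label)

# The model faithfully learns the historical bias and applies it forward:
# identical applicants are approved at different rates depending on group.
for g in (0, 1):
    rate = clf.predict(np.column_stack([skill, np.full(n, g)])).mean()
    print(f"approval rate if everyone were group {g}: {rate:.2f}")
```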