UK High Court Dismisses Appeal to Classify AI as an Inventor

Under the Patents Act, a UK court ruled that creator Stephen Thaler’s “Creativity Machine,” called DABUS, could not be an inventor. Thaler appealed, and the UK’s High Court dismissed the appeal, holding that an inventor must be a person, not a machine. Thaler, however, insists that DABUS is “fundamentally different from other AI systems,” noting that, via “simple learning rules,” it combines “swarms of many artificial neural nets, each containing interrelated patterns spanning some conceptual space … with no predetermined objective.”

AI-Powered Movies in Progress, Writing Makes Major Strides

In the not-so-distant future, there will likely be services that let users choose plots, characters and locations, which are then fed into an AI-powered transformer to produce a fully customized movie. The idea of using generative artificial intelligence to create content goes back to DeepDream, the computer vision program created in 2015 by Google engineer Alexander Mordvintsev. Bringing that fantasy closer to reality is the AI system GPT-3, which creates convincingly coherent and interactive writing, often fooling experts.

Facebook Reveals New AI-Powered Text-to-Speech System

Facebook introduced an AI text-to-speech (TTS) system that produces a second of audio in 500 milliseconds. According to Facebook, the system, combined with a new approach to data collection, powered the creation of a British accent-inflected voice in six months, versus the year-plus required for other voices. The TTS system is now used in Facebook’s Portal smart displays. The system runs in real time on ordinary processors and is also available as a service for other apps, including Facebook’s VR.
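The speed claim boils down to a real-time factor: synthesis time divided by audio duration, where values below 1.0 mean faster than real time. A minimal sketch (the function name is ours, not Facebook’s):

```python
# Real-time factor (RTF) implied by Facebook's figure of one second
# of audio synthesized in 500 milliseconds. RTF < 1.0 means the system
# generates speech faster than it plays back.
def real_time_factor(synthesis_seconds, audio_seconds):
    return synthesis_seconds / audio_seconds

rtf = real_time_factor(0.5, 1.0)  # 500 ms to produce 1 s of audio
print(rtf)  # 0.5
```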

Intel to Unveil Experimental Neuromorphic Computing System

Intel will debut Pohoiki Springs, an experimental research system for neuromorphic computing, which mimics the way human brains work to compute more quickly and with less energy. It will first be made available, via the cloud, to the Intel Neuromorphic Research Community, which includes about a dozen companies (such as Accenture and Airbus) as well as academic researchers and government labs. Intel and Cornell University jointly published a paper on the Loihi chip’s ability to learn and recognize 10 hazardous materials by smell.
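The brain-like computation referenced here is built from spiking neurons. As a toy illustration (not Intel’s implementation; leak and threshold values are invented), a leaky integrate-and-fire neuron accumulates input current, decays over time, and emits a spike when it crosses a threshold:

```python
# Toy leaky integrate-and-fire neuron, the basic unit that neuromorphic
# chips like Loihi implement in silicon. Parameters are illustrative only.
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current   # leaky integration of input current
        if v >= threshold:       # fire and reset the membrane potential
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.6, 0.6, 0.6, 0.0, 0.6]))  # [0, 1, 0, 0, 1]
```

Because neurons only compute when spikes arrive, activity (and energy use) is sparse — the property behind the efficiency claims for neuromorphic hardware.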

Facebook’s 3D Photos Now Available for All Latest Handsets

Facebook’s 3D Photos feature, which uses depth data to create images that can be examined from different angles via virtual reality headsets, is now available on any of the latest single-camera handsets, including the Apple iPhone 7 or higher and any midrange (or better) Android phone. According to Facebook, advances in machine learning techniques have made this feature possible. The company first unveiled 3D Photos in late 2018, when the feature required either a dual-camera phone or a depth map file on the desktop.

HPA Tech Retreat: ETC Immersive Media Challenge Explained

ETC’s immersive media head Phil Lelyveld presented a session describing the organization’s third Immersive Media Challenge — this one with a 5G twist. “The challenge is to ask students and recent graduates to come up with an idea for an engaging experience that is impossible to build now that should be possible to build in three to five years,” he said. “It’s not a hackathon. If you can build it in three to five years, you should probably start building it now. If it’s longer than five years, it’s Fantasyland.”

Researchers Create AI Technique to Generate Video Captions

Researchers at Microsoft Research Asia and the Harbin Institute of Technology have developed a new technique that uses artificial intelligence to generate live video captions. In the past, technologists used encoder-decoder models that did not capture the interaction between videos and comments, resulting in largely irrelevant comments. The new technique, based on a model that iteratively learns to capture the representations of audio, video and comments, outperforms current methods, according to the research team.

AI Regulation’s First Testing Ground Is the European Union

Artificial intelligence and its potential to harm consumers have been much in the spotlight, now more than ever in Europe. Several Big Tech executives are in Europe ahead of the annual World Economic Forum in Davos, and some, such as Microsoft president Brad Smith, are meeting with the European Union’s new competition chief, Margrethe Vestager. Under the European Commission’s new president, Ursula von der Leyen, new rules regulating the free flow of data and competition are under consideration.

Google Bypasses Cloud to Offer AI to Enterprise Customers

AI can enable many important tasks, from manufacturing to medicine, but only if the applications are speedy and secure. Communication via the cloud adds latency and risks privacy, which is why Google worked on a solution, dubbed Coral, that avoids centralized data centers. Coral product manager Vikram Tank described Coral as a “platform of [Google] hardware and software components … that help you build devices with local AI — providing hardware acceleration for neural networks … right on the edge device.”

CES 2020: Location-Based AI Is Enabling an Efficient Future

Location-based data is key to many of the efficiencies promised by smart, AI-enabled cities. HERE Technologies got its start in location data in 1985 when, as Navteq (and later Nokia), its goal was to digitize mapping and pioneer in-car navigation. In 2015, HERE was sold to a consortium of German automakers; it currently has nine direct and indirect shareholders. The company now creates 3D maps and other location-based solutions. During CES, HERE senior VP of development and CTO Giovanni Lanfranchi described how the company ran a hackathon in Istanbul challenging ordinary citizens to come up with new location-based solutions.

The Human Interface: What We Expect From AI at CES 2020

We’re not going to lie: the annual “heads up CES” piece on artificial intelligence is a major exercise in hit or miss. That’s because technology rarely evolves on an annual time scale, and certainly not advanced technology like AI. Yet here we are once again. Sure, 2019 was as fruitful as it gets in the AI research community. The raw debate between Neural Network Extremists (those pushing for an “all neural nets all the time” approach to intelligence) and the Fanatical Symbolists (those advocating a hybrid approach combining knowledge bases, expert systems and neural nets) took an ugly “Mean Girls” turn, with two of the titans of the field (Gary Marcus and Yann LeCun) trading real insults on Twitter just a few days ago.

Google and IBM Create Advanced Text-to-Speech Systems

Both IBM and Google recently advanced development of text-to-speech (TTS) systems that create high-quality digital speech. OpenAI found that, since 2012, the compute used to train the largest AI models has grown more than 300,000-fold. IBM created a much less compute-intensive model for speech synthesis, stating that it can synthesize speech in real time and adapt to new speaking styles with little data. Google and Imperial College London created a generative adversarial network (GAN) to produce high-quality synthetic speech.
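A GAN pairs a generator, which produces fake samples, against a discriminator, which scores samples as real or fake. A toy sketch of that adversarial objective (this is not GAN-TTS itself; the networks, shapes and data here are stand-ins):

```python
import numpy as np

# Toy GAN objective: a generator maps noise to a fake "waveform frame";
# a discriminator outputs the probability that a frame is real. All
# parameters and shapes are invented for illustration.
rng = np.random.default_rng(1)

def generator(z, Wg):
    return np.tanh(z @ Wg)                  # fake waveform frame

def discriminator(x, Wd):
    return 1 / (1 + np.exp(-(x @ Wd)))      # probability "real"

Wg = rng.normal(size=(8, 64)) * 0.1
Wd = rng.normal(size=(64,)) * 0.1

real = rng.normal(size=(64,))               # stand-in for a real audio frame
fake = generator(rng.normal(size=(8,)), Wg)

# discriminator loss: classify real as 1, fake as 0
d_loss = -np.log(discriminator(real, Wd)) - np.log(1 - discriminator(fake, Wd))
# generator loss: produce frames the discriminator scores as real
g_loss = -np.log(discriminator(fake, Wd))
print(d_loss > 0 and g_loss > 0)  # True
```

Training alternates gradient steps on the two losses; for speech, the payoff is that the generator learns to produce audio the discriminator cannot distinguish from recordings.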

SuperGLUE Is a Benchmark for Language-Understanding AI

Researchers recently introduced a series of rigorous benchmark tasks that measure the performance of sophisticated language-understanding AI. Facebook AI Research, with Google’s DeepMind, the University of Washington and New York University, introduced SuperGLUE last week, based on the idea that deep learning models for today’s conversational AI require greater challenges. SuperGLUE, which uses Google’s BERT representational model as a performance baseline, follows the 2018 introduction of GLUE (General Language Understanding Evaluation) and encourages the creation of models that can understand more nuanced, complex language.
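Like GLUE before it, SuperGLUE reduces performance across its component tasks to a single leaderboard number. A minimal sketch of that kind of aggregation (BoolQ, RTE and WiC are real SuperGLUE tasks, but the scores below are invented):

```python
# SuperGLUE-style scoring: per-task metrics are averaged into one
# leaderboard number. Task names are real SuperGLUE tasks; the scores
# here are made up for illustration.
task_scores = {"BoolQ": 0.80, "RTE": 0.72, "WiC": 0.69}

overall = sum(task_scores.values()) / len(task_scores)
print(round(overall, 3))  # 0.737
```

Averaging across deliberately hard, diverse tasks is what keeps a single headline score from being dominated by any one easy dataset.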

Privacy Concerns Grow Over Facial Recognition Data Sets

Social networks, dating services, photo websites and surveillance cameras are just some of the sources feeding a growing number of databases of people’s faces. According to privacy advocates, Microsoft and Stanford University are among the many groups gathering images, with one such repository holding two million images. These photos are used to train neural networks in pattern recognition, in the quest to create cutting-edge facial recognition platforms. Some companies have collected images for more than 10 years.

Google Scientists Generate Realistic Videos at Scale with AI

Google research scientists report that they have produced realistic frames from open source video data sets at scale. Neural networks can generate complete videos from only a start and end frame, but the complexity, information density and randomness of video have made it challenging to create such realistic clips at scale. The scientists wrote that, to their knowledge, “this is the first promising application of video-generation models to videos of this complexity.” The systems are based on the Transformer neural architecture described in a Google Brain paper and are autoregressive, “meaning they generate videos pixel by pixel.”
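“Pixel by pixel” means each new pixel is sampled conditioned on everything generated so far. A minimal autoregressive sketch of that dependency structure (this is not Google’s model; the conditional distribution below is a smoothed toy we invented):

```python
import random

# Toy autoregressive generation: each pixel of a 4x4 binary "frame" is
# sampled conditioned on all previously generated pixels, mirroring the
# pixel-by-pixel scheme the paper describes. The conditional is a toy
# Laplace-smoothed frequency, purely for illustration.
random.seed(42)
H, W = 4, 4

def next_pixel(history):
    # probability of a 1 given the pixels generated so far
    p = (sum(history) + 1) / (len(history) + 2)
    return int(random.random() < p)

flat = []
for _ in range(H * W):
    flat.append(next_pixel(flat))       # condition on all prior pixels

frame = [flat[r * W:(r + 1) * W] for r in range(H)]
print(len(frame), len(frame[0]))  # 4 4
```

Real models replace the toy conditional with a Transformer over the pixel sequence, which is why generating long, high-resolution videos this way is so computationally demanding.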