CES Panel: Impact of Evolving Tech on Autonomous Vehicles

Faye Francy, executive director of the Automotive Information Sharing and Analysis Center (Auto-ISAC), led a conversation about the impact of machine learning, deep learning and AI on the autonomous vehicle (AV) ecosystem. “They work together to bring great things — and possibly nefarious things — to the auto industry,” she said. Inivision AI chairman Seamus Hatch noted that the three terms aren’t interchangeable. “We’re many years behind the singularity,” he said. “It’s a machine trained to solve a specific problem faster and more accurately than a human.”

We Were Passengers in a Las Vegas ‘Self-Driving’ Rideshare

Autonomous vehicles have been a part of tech culture for so long that it’s easy to forget that only a handful of people have actually ridden in one. So it was with great surprise that our very first Lyft ride out of our Las Vegas hotel on Sunday night was in a “self-driving” vehicle. Lyft partnered with Irish auto-parts-company-turned-autonomous-vehicle-startup Aptiv (formerly known as Delphi) to offer CES attendees and Vegas commuters the option to ride in one of its 30 “self-driving” BMW 5 Series sedans.

Here’s What We Hope to See This Week at CES Related to AI

With the hype dying down, AI research more vibrant than ever, and mainstream experimentation on the rise, there’s a lot to look forward to at CES 2019 in the field of AI and machine learning. It all seems to converge on one very interesting trend: pragmatism. As AI exits the lab and heads into the world, we’re expecting new and compelling applications. At CES this week, we’re hoping to see advances in areas such as autonomous vehicles, consumer robots, computer vision, smart assistants, and a more integrated Internet of Things.

Hive Builds Tailored AI Models via 700,000-Person Workforce

Hive, a startup founded by Kevin Guo and Dmitriy Karpman, trains domain-specific artificial intelligence models via its 100 employees and 700,000 workers who classify images and transcribe audio. The company uses the Hive Work smartphone app and website to recruit the people who label the data, and recently introduced three products: Hive Data, Hive Predict, and Hive Enterprise. Shortly after the product launch, Peter Thiel’s Founders Fund and other venture capital firms invested $30 million in the startup.
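A common way to turn redundant crowd annotations into a single training label is majority voting. The sketch below is a generic illustration of that idea, not Hive’s disclosed method; the function and data names are hypothetical.

```python
from collections import Counter

def aggregate_labels(worker_labels):
    """Majority vote per item across redundant worker annotations,
    with an agreement score indicating how unanimous the workers were."""
    result = {}
    for item, labels in worker_labels.items():
        winner, count = Counter(labels).most_common(1)[0]
        result[item] = {"label": winner, "agreement": count / len(labels)}
    return result

# Three workers label each image; disagreement lowers the agreement score
votes = {"img_001": ["cat", "cat", "dog"],
         "img_002": ["car", "car", "car"]}
summary = aggregate_labels(votes)  # img_001 -> "cat", img_002 -> "car"
```

In a real pipeline, items with low agreement would typically be routed to additional workers before entering a training set.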

IBM, Harvard University Develop New Tool for AI Translation

At the IEEE Conference on Visual Analytics Science and Technology in Berlin, IBM and Harvard University researchers presented Seq2Seq-Vis, a tool for debugging machine translation systems. Translation tools rely on neural networks, which are opaque, making it difficult to determine how mistakes were made; this opacity is known as the “black box problem.” Seq2Seq-Vis allows deep-learning app creators to visualize the AI’s decision-making process as it translates a sequence of words from one language to another.
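One thing a tool like Seq2Seq-Vis exposes is the attention alignment between source and target tokens. The sketch below is a minimal, hypothetical illustration of that view (not the actual Seq2Seq-Vis code): raw attention scores are normalized with a softmax and printed as a table linking each target word to the source words the model attended to.

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def show_attention(src, tgt, raw_scores):
    """Print a normalized attention table (one row per target token)
    and return the normalized weights."""
    print("        " + "  ".join(f"{w:>6}" for w in src))
    weights = [softmax(row) for row in raw_scores]
    for t, row in zip(tgt, weights):
        print(f"{t:>6}  " + "  ".join(f"{w:6.2f}" for w in row))
    return weights

# Hypothetical English-to-French toy example
src = ["the", "cat", "sat"]
tgt = ["le", "chat"]
raw = [[2.0, 0.1, 0.1],   # "le" attends mostly to "the"
       [0.1, 2.0, 0.1]]   # "chat" attends mostly to "cat"
attn = show_attention(src, tgt, raw)
```

A misaligned row in a table like this is exactly the kind of clue a debugging tool surfaces: it shows which source word the model was looking at when it produced a wrong output word.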

Accounting, Finance Industries Demand Explainable AI Tools

As artificial intelligence-based tools become more widespread in business, cloud service companies are debuting tools that explain the artificial intelligence algorithms they use, providing more transparency and assuring users that the algorithms behave ethically. That’s because regulated industries are demanding it. Capital One and Bank of America are just two companies interested in using AI to improve fraud detection, but they want to know how the algorithms work before implementing such tools.

IBM Creates Machine-Learning Aided Watermarking Process

IBM now has a patent-pending, machine-learning-enabled watermarking process that promises to deter intellectual property theft. IBM manager of cognitive cybersecurity intelligence Marc Ph. Stoecklin described how the process embeds unique identifiers into neural networks to create “nearly imperceptible” watermarks. The process, recently highlighted at the ACM Asia Conference on Computer and Communications Security (ASIACCS) 2018 in Korea, might soon be productized, either within IBM or as an offering for its clients.
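IBM has not published the details of its patent-pending process, but one well-known way to watermark a neural network is a secret “trigger set”: chosen input-label pairs embedded during training, which the owner can later check against a suspect model to prove provenance. The sketch below illustrates the verification step only; the names and the toy models are hypothetical.

```python
def verify_watermark(model, trigger_inputs, trigger_labels, threshold=0.9):
    """Claim ownership if the model reproduces the secret trigger
    labels at a rate an unwatermarked model would be unlikely to hit."""
    hits = sum(1 for x, y in zip(trigger_inputs, trigger_labels)
               if model(x) == y)
    return hits / len(trigger_inputs) >= threshold

# Toy stand-ins: the "watermarked" model matches every trigger pair,
# while an unrelated model matches too few of them
triggers = [3, 4, 5, 6]
secret_labels = [0, 1, 2, 0]
watermarked = lambda x: x % 3
owned = verify_watermark(watermarked, triggers, secret_labels)    # True
other = verify_watermark(lambda x: 0, triggers, secret_labels)    # False
```

The verification only requires query access to the suspect model, which is why this style of watermark can survive even when the network’s weights are copied wholesale.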

Samsung Fund to Boost Startups with New Approaches to AI

Some startups are trying to create forms of AI other than deep learning, to minimize the amount of training data and server power needed. Samsung Next, the South Korean company’s venture capital unit, just launched the Q Fund to jumpstart this idea by funding companies focused on new ways of developing artificial intelligence. One of Q Fund’s first investments is Vicarious, a startup that wants to give machines “imagination” and draws inspiration from biology to make machines learn more quickly.

The Best New Products Displayed at Augmented World Expo

Several demos stood out at the 9th annual Augmented World Expo in Santa Clara, California last week. The most compelling involved a holographic display from Brooklyn-based Looking Glass Factory. Co-founder and CEO Shawn Frayne and his team have been working for a few years on a technique that “blends the best of volumetric rendering and light field projection.” Also compelling was a markerless multi-person tracking system that runs off a single video feed, developed by a Canadian computer vision/deep learning company named wrnch. And marking its first exhibit in the United States since launching its latest satellite office in San Francisco this April, Japanese company Miraisens demonstrated how a suite of effects could be used to enhance extended reality experiences.

Nvidia Emphasizes Software at Technicolor Experience Event

At the Technicolor Experience Center in Culver City, Nvidia held an event highlighting its decisive move into software, spanning artificial intelligence, virtual reality and other areas. Vice president of developer programs Greg Estes noted that the company has 850,000 developers all over the world in universities and labs as well as companies like Adobe. Its developer program provides hands-on training in AI and parallel computing, impacting the media and entertainment industry, as well as smart cities, autonomous vehicles and more.

Microsoft Reaches Out to Developers at its Build Conference

Microsoft revealed interesting news during this week’s Build developer conference in Seattle, Washington. Among the key announcements: a pair of mixed reality enterprise apps for the HoloLens; a partnership with DJI to bring Microsoft’s AI and machine learning tech to commercial drones; a preview launch of deep learning acceleration platform Project Brainwave; prototype hardware designed for the meeting room of the future; and Project Kinect for Azure, which provides developers with the opportunity to experiment with a package of sensors and Microsoft’s next-generation depth camera.

Nvidia’s New AI Method Can Reconstruct an Image in Seconds

Nvidia debuted a deep learning method that can edit or reconstruct an image that is missing pixels or has holes, via a process called “image inpainting.” The model can handle holes of “any shape, size, location or distance from image borders,” and could be integrated into photo editing software to remove undesirable imagery and replace it with a realistic digital fill – instantly and with great accuracy. Previous AI-based approaches focused on rectangular regions in the image’s center and required post-processing.
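Nvidia’s published approach uses learned “partial convolutions,” but the core mask-and-fill idea can be sketched without a network: track which pixels are known, and fill each hole pixel from its known neighbors. The function below is a naive single-pass illustration of that idea, not Nvidia’s model.

```python
import numpy as np

def inpaint_step(image, mask, k=3):
    """One naive fill pass: each hole pixel (mask == 0) takes the mean
    of the known pixels in its k x k neighborhood, and the mask is
    updated so filled pixels count as known on the next pass."""
    h, w = image.shape
    out, new_mask = image.copy(), mask.copy()
    pad = k // 2
    for y in range(h):
        for x in range(w):
            if mask[y, x] == 0:
                y0, y1 = max(0, y - pad), min(h, y + pad + 1)
                x0, x1 = max(0, x - pad), min(w, x + pad + 1)
                known = mask[y0:y1, x0:x1] == 1
                if known.any():
                    out[y, x] = image[y0:y1, x0:x1][known].mean()
                    new_mask[y, x] = 1
    return out, new_mask

# 4x4 grayscale image with one missing pixel at (1, 1)
img = np.ones((4, 4))
img[1, 1] = 0.0                       # hole value is arbitrary
msk = np.ones((4, 4), dtype=int)
msk[1, 1] = 0
filled, msk2 = inpaint_step(img, msk)  # hole filled from 8 known neighbors
```

Repeating the pass until the mask is all ones fills arbitrarily shaped holes from the border inward, which is roughly what stacked partial-convolution layers learn to do with far richer context.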

ETC and NAGRA Partner on Fandom Genomics for myCinema

NAGRA, the digital TV division of the Kudelski Group, just debuted a partnership with ETC@USC to conduct a data science study — dubbed Fandom Genomics — for its newly launched myCinema platform. Debuted at CinemaCon 2018, myCinema is a broadband-based in-theater platform that provides a large catalog of alternative content titles to theater chains of all sizes and in any location, and is intended to help exhibitors reclaim the theater’s position as the social center of the community.

NAB 2018: Artificial Intelligence Tools for Animation and VFX

Tools powered by artificial intelligence and machine learning can also be used in animation and visual effects. Nvidia senior solutions architect Rick Grandy noted that the benefit of such tools is that artists don’t have to replicate their own work. Examples include deep learning used to create realistic character motion in real-time via game engines and AI, as well as a phase-functioned neural network for character control, which can be trained on motion capture or animation data.

NAB 2018: Machine-Learning Tools to Become Vital for Editing

USC School of Cinematic Arts professor and editor Norman Hollyn spoke at a machine learning conference about the ML tools available today, and those that are imminent, for editing film/TV content. Underscoring the growing importance of ML-powered tools for editors, Hollyn pointed out that editors who resisted the advent of digital nonlinear editing in the 1990s exited the industry. “AI is bringing things into the post production world and if we don’t start to look at and embrace them, we’ll be ex-editors,” he said.
