Intel Unveils Neuromorphic, Self-Learning Chip Codenamed Loihi

Intel has unveiled a new, specialized compute core designed for AI, deep learning, and neural networks. Meet Loihi.

The original article can be found here: https://www.extremetech.com/computing/256467-intel-unveils-new-neuromorphic-self-learning-chip-codenamed-loihi?source=Computing

Is BabyX the Future of Silicon-Based Life Forms?

Mark Sagar, the creator of BabyX, has chosen no less a goal than simulating the neural machinery of an infant human in silico.

The original article can be found here: https://www.extremetech.com/computing/255856-babyx-future-silicon-based-life-forms?source=Computing

Microsoft, Facebook Unveil Open Standard for AI, Deep Learning Networks

Microsoft and Facebook are teaming up to launch a new neural network standard, ONNX. Models built using the standard will be able to move between frameworks rather than being locked into a single solution.
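
To make the portability claim concrete, here is a minimal sketch of the round trip ONNX is meant to enable: define a model in one framework, export it to the shared format, and reload it with a different runtime. The toy model, the file name, and the PyTorch/onnxruntime pairing are illustrative choices of ours, not anything specified in the announcement.

```python
# Minimal sketch of the ONNX round trip, assuming PyTorch on the export
# side. The model and file name below are placeholders.
import torch
import torch.nn as nn

# A toy network standing in for any model you might want to move.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export: ONNX serializes the network as a framework-neutral graph.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "toy_model.onnx",
                  input_names=["x"], output_names=["y"])

# Any ONNX-compatible runtime can then execute the graph, e.g.:
#   import onnxruntime as ort
#   session = ort.InferenceSession("toy_model.onnx")
#   (output,) = session.run(None, {"x": dummy_input.numpy()})
```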

The original article can be found here: https://www.extremetech.com/computing/255297-facebook-microsoft-announce-new-open-standard-ai-deep-learning-networks?source=Computing

Brain-like neural networks study space-time distortions at breakneck speed

Researchers have used brain-like “neural networks” to analyze key distortions in space-time 10 million times faster than conventional methods.

The original article can be found here: http://www.foxnews.com/tech/2017/08/30/brain-like-neural-networks-study-space-time-distortions-at-breakneck-speed.html

New Movidius Myriad X VPU Packs a Custom Neural Compute Engine

Movidius’ new Myriad X packs a specialized deep neural network engine, more of the company’s SHAVE vision-processing cores, and expanded interconnect support, all while sticking to the Myriad 2’s power consumption.

The original article can be found here: https://www.extremetech.com/computing/254772-new-movidius-myriad-x-vpu-packs-custom-neural-compute-engine?source=Computing

Microsoft’s Next HoloLens Will Contain an AI Coprocessor

Microsoft’s next-generation HPU will pack an AI coprocessor for running deep neural networks, all while the headset remains a self-contained, battery-powered device.

The original article can be found here: https://www.extremetech.com/computing/252998-microsoft-second-generation-hololens-ai-coprocessor?source=Computing

Microsoft HoloLens 2 headset will include an artificial intelligence chip

Microsoft designed an AI chip to allow its mixed-reality headset to run deep neural networks without relying on the cloud.

The original article can be found here: http://www.foxnews.com/tech/2017/07/24/microsoft-hololens-2-headset-will-include-artificial-intelligence-chip.html

MIT Research Could Make Neural Networks Portable

Running a neural network locally on your mobile device could be hugely beneficial, and it may become possible thanks to research from a team at MIT.

The original article can be found here: https://www.extremetech.com/extreme/252689-mit-research-make-neural-networks-portable?source=Computing

Frighteningly accurate 'mind reading' AI reads brain scans to guess what you're thinking

Carnegie Mellon University researchers have developed a deep learning neural network that’s able to read complex thoughts based on brain scans — even interpreting complete sentences.

The original article can be found here: http://www.foxnews.com/tech/2017/06/29/frighteningly-accurate-mind-reading-ai-reads-brain-scans-to-guess-what-youre-thinking.html

MIT makes breakthrough in morality-proofing artificial intelligence

In groundbreaking work on morality-proofing AI, researchers at MIT are designing neural networks that can explain how they reached their conclusions.

The original article can be found here: https://www.extremetech.com/computing/238611-mit-makes-breakthrough-morality-proofing-artificial-intelligence?source=Computing

IBM’s resistive computing could massively accelerate AI — and get us closer to Asimov’s Positronic Brain

Neural networks have enabled a revolution in machine learning. IBM researchers show how resistive computing can be used to make them massively more powerful.

The original article can be found here: http://www.extremetech.com/extreme/225707-ibms-resistive-computing-could-massively-accelerate-ai-and-get-us-closer-to-asimovs-positronic-brain

Forget ‘Deep Dream,’ Google’s ‘Deep Stereo’ Can Recreate the Real World

Google’s artificial neural network has run rampant throughout the internet in the last few weeks, turning demure Twitter photos into surrealistic nightmares and taking the already-hellish Fear and Loathing in Las Vegas to a level only Hunter S. Thompson himself could have imagined.

But let us not forget: Google’s AI also serves practical ends. Google engineers are using their layered artificial neural network, also called a deep network, to create unseen views from two or more images of a scene. They call it “Deep Stereo.” For example, if you have a photo from the left and a photo from the right of a scene, the deep network will tell you what it looks like from anywhere in the middle. Or if there are five photos of a room, the deep network can render unique views of what the room would look like from other angles, based on what it thinks should be there.

The network does this by figuring out the depth and color of each pixel of the images and creating a 3D space that uses each input photo as a reference plane. Then it works to fill in the gaps based on the color and depth input from the original photos. At its current settings, the deep network can work with 96 planes of depth gathered from the images.
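
As a rough illustration of that depth-plane idea, here is a toy plane-sweep view interpolator in plain NumPy. It is a minimal sketch, not Google’s method: it assumes a rectified left/right pair, so each depth plane reduces to a horizontal shift (a disparity), and it uses a hand-written photo-consistency score where Deep Stereo substitutes a trained network. All names and parameters below are ours.

```python
# Toy plane-sweep view interpolation; purely illustrative.
import numpy as np

def shift(img, dx):
    """Shift an image horizontally by dx pixels, replicating the edge."""
    out = np.roll(img, dx, axis=1)
    if dx > 0:
        out[:, :dx] = img[:, :1]
    elif dx < 0:
        out[:, dx:] = img[:, -1:]
    return out

def interpolate_view(left, right, alpha=0.5, num_planes=96):
    """Render a virtual view a fraction alpha between left (0) and right (1).

    Sweeps num_planes candidate disparities; for each pixel, keeps the
    plane where the two warped inputs agree best, then blends them there.
    """
    h, w, _ = left.shape
    left_f, right_f = left.astype(float), right.astype(float)
    best_err = np.full((h, w), np.inf)
    out = np.zeros_like(left_f)
    for d in range(num_planes):  # one fronto-parallel plane per disparity
        # Warp both inputs onto this plane as seen from the virtual view.
        l_w = shift(left_f, -int(round(d * alpha)))
        r_w = shift(right_f, int(round(d * (1 - alpha))))
        err = np.abs(l_w - r_w).sum(axis=2)  # photo-consistency per pixel
        mask = err < best_err
        best_err[mask] = err[mask]
        out[mask] = 0.5 * (l_w + r_w)[mask]  # blend where this plane wins
    return out.astype(np.uint8)

# Smoke test on synthetic data; real inputs would be rectified photos.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
right = shift(left, -4)  # fake a 4-pixel baseline between the cameras
middle = interpolate_view(left, right, alpha=0.5)
```

Sweeping 96 planes mirrors the figure quoted above; the real system warps full perspective images onto each plane rather than shifting columns, and lets the network decide how to combine them.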

Researchers initially trained the neural network to do this (much like they trained it to produce its own images) by running 100,000 sets of photos through the network and having it create unique views. The newly rendered images had to be small because, according to the study, it takes 12 minutes to produce a tiny 512×512 pixel image, making the RAM required to process an entire image “prohibitively expensive.” The images used were “street scenes captured by a moving vehicle,” which we can only assume is a Google Street View car, whose images come up again below.
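
That training recipe amounts to self-supervision: hold out one real photo from each set, render its viewpoint from the neighboring photos, and penalize the per-pixel difference. A minimal sketch of that loop, assuming a stand-in convolutional model and random tensors in place of the street-scene sets:

```python
# Self-supervised view-synthesis training loop; the tiny model and fake
# data below are placeholders, not the architecture from the study.
import torch
import torch.nn as nn

# Stand-in network: maps two stacked neighbor views to a predicted
# middle view. Deep Stereo instead consumes plane-sweep volumes.
net = nn.Sequential(
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):  # the study used roughly 100,000 photo sets
    left = torch.rand(1, 3, 64, 64)
    right = torch.rand(1, 3, 64, 64)
    target = 0.5 * (left + right)  # pretend this is the held-out photo

    pred = net(torch.cat([left, right], dim=1))
    loss = (pred - target).abs().mean()  # per-pixel reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
```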

Researchers note that the two biggest drawbacks are the speed at which these images are processed and the fact that the network can only process five input images at a time, limiting resolution and accuracy.

An obvious application for this technique would be making Google’s Street View a more fluid experience, rather than having users jump from photo to photo taken by the car. However, if 512×512 pixel images are the current practical limit, and Google has pretty much imaged most of the driven world, we’re not expecting this feature to arrive any time soon.

The original article can be found here: http://www.popsci.com/putting-googles-famously-trippy-deep-network-work
