

Movidius: Neural Network on a USB

#1
So you say you'd like a packaged neural network product for doing things like machine vision?

And you'd like it to live on a USB drive?

And use the Caffe framework for neural nets?

And deliver 100 Gigaflops for a 1W draw?

And be chainable?


Well, now that's something you can get from Intel. They bought Movidius, which makes this Neural Compute Stick that seems like something out of a Deus Ex game, and they're selling it now.

For $79.

I really have no idea what anyone would use these for, but dang, this sounds sexy.

Re: Movidius: Neural Network on a USB

#5
Talvieno wrote:
Mon Jul 24, 2017 12:03 pm
This sounds awesome. Normally I'm not very intrigued by neural networks in general, but this seems like such a simple solution if you did want one in a hurry.
Can you tell me what you can use a neural network on a USB stick for? As far as my understanding of neural networks goes, they need A LOT of computation power and storage to be effective, and they need space to store their own database somewhere. So... a neural network on a USB stick. What?
Automation engineer, lateral thinker, soldier, addicted to music, books and gaming.
Nothing to see here
Flatfingers wrote: 23.01.2017: "Show me the smoldering corpse of Perfectionist Josh"

Re: Movidius: Neural Network on a USB

#6
Depends on how complex you want to make it. A single-layer neural network only takes up a handful of MB. Easy to store on a flash drive. The program would be stored on the USB too, and the computation power is supplied by your computer. Of course, you could easily make one of these things by yourself... but I suppose this is the "library" equivalent. Plug it in and it works, so you don't have to do any heavy lifting.

Re: Movidius: Neural Network on a USB

#7
JanB1 wrote:
Mon Sep 11, 2017 12:28 am
Can you tell me what you can use a neural network on a USB stick for? As far as my understanding of neural networks goes, they need A LOT of computation power and storage to be effective, and they need space to store their own database somewhere. So... a neural network on a USB stick. What?
A NN's data needs grow roughly quadratically with the size of the network: a fully connected layer between two layers of n neurons stores on the order of n² weights.
So the couple of gigs you can trivially fit on a stick are enough for simpler stuff (i.e. pretty much anything that's not one of Google's speech or image recognition networks).
And a modern GPU or ASIC can do a lot with the 2.5 watts a USB port delivers. (Again, most of the things that aren't speech or image processing.)

So I can imagine that some pretty capable NNs can be fully contained within such a stick.
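To put rough numbers on that, here's some back-of-the-envelope arithmetic in Python. The layer sizes are invented (loosely the shape of a small image classifier), purely to show how quickly the weight count grows:

```python
# Rough storage estimate for a small fully connected network,
# assuming 32-bit (4-byte) floats per weight. Layer sizes are invented.
layers = [784, 512, 512, 10]   # e.g. a MNIST-sized classifier

# each pair of adjacent layers contributes (a * b) weights
weights = sum(a * b for a, b in zip(layers, layers[1:]))
size_mb = weights * 4 / 1e6

print(weights, round(size_mb, 1))   # well under 3 MB for this network
```

So a few hundred thousand weights fit in a couple of megabytes; even scaling up by orders of magnitude stays comfortably inside flash-drive territory.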

Re: Movidius: Neural Network on a USB

#8
Gotta be honest, exactly how neural networks work (and I don't mean how they learn and then use what they've learned, but how they ACTUALLY work), and how evolutionary algorithms work, is a total mystery to me. I'd be glad if someone could enlighten me, because it bugs me that I don't know this. :D

Re: Movidius: Neural Network on a USB

#9
JanB1 wrote:
Mon Sep 11, 2017 11:39 pm
Gotta be honest, exactly how neural networks work (and I don't mean how they learn and then use what they've learned, but how they ACTUALLY work), and how evolutionary algorithms work, is a total mystery to me. I'd be glad if someone could enlighten me, because it bugs me that I don't know this. :D
ANNs [artificial neural networks] come in about a hundred kinds, but the main kinds I know about are feedforward, recurrent, and convolutional.

Feedforward networks are one of, if not the, simplest kinds of ANN, and serve as a fine tool to grasp the basics of the rest.
A feedforward network consists of vertices (neurons) and edges (synapses). The neurons are arranged into layers. The first of these is the input layer, whose neurons' values are driven by stimuli. The last is the output layer, whose neurons' values are taken as the outputs for the problem being solved. All other layers are known as hidden layers, called that because a black-box interpretation of the network does not account for them.

Neurons in the hidden and output layers are driven by the neurons in the previous layer through the synapses. Each such neuron's value is the sigmoid of the sum of the previous layer's values, each weighted by the relevant synapse's weight. The sigmoid is an approximately s-shaped function that normalizes the sum to between 0 and 1, or -1 and 1, depending on the application. It has a higher slope near the origin, which increases sensitivity at middling values and decreases it at the extremes.

The network learns by changing the synaptic weights. The network as a whole forms a "combinatorial" system; that is, the output is mapped from the input without respect to time or prior values. Because of this, feedforward networks have no capacity for memory except in the genome itself. A feedforward network could be compared to the proportional component of a PID controller, entirely lacking both memory and any sense of change.
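To make that concrete, here's a toy forward pass in plain Python. The layer sizes, weights, and biases are invented for illustration, not taken from any real network:

```python
import math

def sigmoid(x):
    # squashes any real number into (0, 1); steepest near x = 0
    return 1.0 / (1.0 + math.exp(-x))

def forward_layer(inputs, weights, biases):
    # each output neuron = sigmoid(weighted sum of the previous layer + bias)
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# toy network: 2 inputs -> 2 hidden neurons -> 1 output (weights invented)
hidden = forward_layer([0.5, -1.0],
                       [[0.8, 0.2], [-0.4, 0.9]],
                       [0.1, -0.1])
output = forward_layer(hidden, [[1.5, -1.0]], [0.0])
print(output)
```

"Learning" here would just mean nudging those weight and bias numbers until the output matches what you want.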

Recurrent neural networks are a simple evolution of feedforward networks. In an RNN, one of the input synapses for each neuron comes from itself in the previous simulation frame. This introduces a temporal component, which can develop into memory or change detection. RNNs are no longer combinatorial, and are able to simulate the integral and derivative components of a PID controller as well as the proportional one.

Convolutional neural networks iteratively examine a set of data, usually arranged like a multidimensional array. This makes them particularly well-suited to, for example, image processing, and thus they have been instrumental in much of the recent advances in computer vision and similar fields. They are arranged largely like the simple feedforward or recurrent networks, save for a steadily decreasing layer size as the network goes deeper, which "distills" the data into usable outputs. The inputs of the network are moved across the array (or image), gaining information from a rolling patch of data.
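The "rolling patch" idea in one dimension, stripped of everything else (the kernel values are a made-up example, not learned):

```python
def convolve_1d(signal, kernel):
    # slide the kernel across the signal; each output is the weighted
    # sum of one patch -- the rolling patch described above
    k = len(kernel)
    return [sum(kernel[i] * signal[j + i] for i in range(k))
            for j in range(len(signal) - k + 1)]

# a tiny edge-detecting kernel: responds only where neighbours differ
edges = convolve_1d([0, 0, 1, 1, 1, 0], [-1, 1])
print(edges)
```

In a real CNN the kernel weights are learned rather than hand-picked, and the same sliding trick runs in two dimensions over image pixels, but the mechanism is this.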

I'm a little unclear on CNNs myself, so forgive me if that section is… lacking.
