So you say you'd like a packaged neural network product for doing things like machine vision?
And you'd like it to live on a USB drive?
And use the Caffe framework for neural nets?
And deliver 100 Gigaflops for a 1W draw?
And be chainable?
Well, now that's something you can get from Intel. They bought Movidius, which makes this Neural Compute Stick that seems like something out of a Deus Ex game, and they're selling it now.
For $79.
I really have no idea what anyone would use these for, but dang, this sounds sexy.
Post
Sun Jul 23, 2017 1:20 am
#2
Re: Movidius: Neural Network on a USB
While I doubt I'll ever find a need for such a device, I can at least feel some affinity with the image you posted, Flat. That USB hub is remarkably similar to one that I own.
Post
Sun Jul 23, 2017 7:28 am
#3
Re: Movidius: Neural Network on a USB
Hmm, now let's just get this to learn to play LT...
Challenging your assumptions is good for your health, good for your business, and good for your future. Stay skeptical but never undervalue the importance of a new and unfamiliar perspective.
Imagination Fertilizer
Beauty may not save the world, but it's the only thing that can
Post
Mon Jul 24, 2017 12:03 pm
#4
Re: Movidius: Neural Network on a USB
This sounds awesome. Normally I'm not very intrigued by neural networks in general, but this seems like such a simple solution if you did want one in a hurry.
Have a question? Send me a PM! || I have a Patreon page up for REKT now! || People talking in IRC over the past two hours:
Post
Mon Sep 11, 2017 12:28 am
#5
Re: Movidius: Neural Network on a USB
Can you tell me what you can use a neural network on a USB stick for? As far as my understanding of neural networks goes, they need A LOT of computation power and storage to be effective. And they need space to store their own database somewhere. So... a neural network on a USB stick. What?
Automation engineer, lateral thinker, soldier, addicted to music, books and gaming.
Nothing to see here
Flatfingers wrote: 23.01.2017: "Show me the smoldering corpse of Perfectionist Josh"
Post
Mon Sep 11, 2017 6:58 am
#6
Re: Movidius: Neural Network on a USB
Depends on how complex you want to make it. A single-layer neural network only takes up a handful of MB. Easy to store on a flash drive. The program would be stored on the USB too, and the computation power is supplied by your computer. Of course, you could easily make one of these things by yourself... but I suppose this is the "library" equivalent. Plug it in and it works, so you don't have to do any heavy lifting.
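To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the layer sizes are made up for illustration, roughly an MNIST-sized classifier, not anything specific to the Movidius stick):

```python
# Rough storage estimate for a plain fully connected network.
def param_count(layer_sizes):
    """Weights plus one bias per output neuron, for each pair of
    adjacent layers."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical 784-input -> 256-hidden -> 10-output network.
params = param_count([784, 256, 10])        # 203,530 parameters
size_mb = params * 4 / (1024 * 1024)        # 32-bit floats: ~0.78 MB
```

So even a network with a couple hundred thousand weights fits in under a megabyte, which is why a flash drive is plenty for the simpler cases.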
Have a question? Send me a PM! || I have a Patreon page up for REKT now! || People talking in IRC over the past two hours:
Post
Mon Sep 11, 2017 8:57 am
#7
Re: Movidius: Neural Network on a USB
JanB1 wrote: ↑Mon Sep 11, 2017 12:28 am
Can you tell me what you can use a neural network on a USB stick for? As my understanding of neural networks goes, they need A LOT of computation power and storage to be effective. And they need space to store their own database somewhere. So...neural network on a USB-Stick. What?

NN data needs grow roughly with the square of the network's size (something on the order of n²).
For that, a couple of gigs, which you can fit trivially on a stick, are enough for the simpler stuff (basically everything that's not one of Google's speech or picture recognition networks).
And a modern GPU or ASIC can do a lot with the 2.5 watts a USB port delivers. (Again, for most things that aren't speech or image processing.)
So I can imagine that some pretty capable NNs can be fully contained within such a stick.
Post
Mon Sep 11, 2017 11:39 pm
#8
Re: Movidius: Neural Network on a USB
Gotta be honest, the exact way neural networks work (and I don't mean how they learn and then use what they've learned, but how they ACTUALLY work) and how evolutionary algorithms work is a total mystery to me. I'd be glad if someone could enlighten me, because it bugs me that I don't know this.
Automation engineer, lateral thinker, soldier, addicted to music, books and gaming.
Nothing to see here
Flatfingers wrote: 23.01.2017: "Show me the smoldering corpse of Perfectionist Josh"
Post
Tue Sep 12, 2017 1:24 am
#9
Re: Movidius: Neural Network on a USB
JanB1 wrote: ↑Mon Sep 11, 2017 11:39 pm
Gotta be honest, the exact way neural networks work (and I don't mean how they learn and then use the learned stuff, but how they ACTUALLY work) and how evolving algorithms work is a total mystery to me. Would be glad if someone could enlighten me. Cuz it bugs me, that I don't know this.

ANNs [artificial neural networks] come in about a hundred kinds, but the main kinds about which I know are feedforward, recurrent, and convolutional.
Feedforward networks are one of, if not the, simplest kinds of ANN, and serve as a fine tool to grasp the basics of the rest.
A feedforward network consists of vertices (or neurons) and edges (or synapses). The neurons are arranged into layers. The first of these layers is the input layer, whose neurons' values are driven by stimuli. The last is the output layer, whose neurons' values are taken as outputs to the problem being solved. All other layers are known as hidden layers, called such because a black-box interpretation of an ANN does not account for them.

Neurons in the hidden layers and the output layer are driven by the neurons in the previous layer through the synapses. The function that relates the neurons of the previous layer and the synaptic weights to the neuron in question is based on the sigmoid of the sum of the values of the previous layer, each weighted by the relevant synapse's weight. The sigmoid function is an approximately s-shaped function that serves to normalize the sum to between either 0 and 1 or -1 and 1, depending on the application. This curve has the interesting property of having a higher slope near the origin, which increases sensitivity at middling values and decreases it at extremes. The network learns by changing the synaptic weights.

The network as a whole forms a "combinatorial" system; that is, the output is mapped to the input without respect to time or prior values. Because of this, feedforward networks have no capacity for memory except in the genome itself. A feedforward network could be compared to the proportional component of a PID controller, entirely lacking both memory and understanding of change.
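That weighted-sum-plus-sigmoid step is small enough to sketch in a few lines of Python (the network and its weights here are made up purely for illustration):

```python
import math

def sigmoid(x):
    # S-shaped squashing function: maps any real number into (0, 1),
    # steepest near 0 and nearly flat at the extremes.
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each output neuron is the sigmoid of the weighted sum of the
    # previous layer's values, plus a bias term.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feedforward(inputs, layers):
    # layers is a list of (weights, biases) pairs, one per layer.
    values = inputs
    for weights, biases in layers:
        values = layer_forward(values, weights, biases)
    return values

# Tiny 2-input -> 2-hidden -> 1-output network with arbitrary weights.
net = [
    ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
out = feedforward([1.0, 0.0], net)  # single value in (0, 1)
```

Note there's no state anywhere: the same input always yields the same output, which is the "combinatorial" property described above.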
Recurrent neural networks are a simple evolution of feedforward networks. In an RNN, one of the input synapses for each neuron comes from itself in the previous simulation frame. This introduces a temporal component, which could develop into memory or change-detection. RNNs are no longer combinatorial, and are able to simulate the integral and derivative components of a PID controller as well as the proportional component.
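A minimal sketch of that self-synapse idea, one neuron wide (the weights are invented for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RecurrentNeuron:
    """One neuron whose inputs include its own value from the previous
    simulation frame, giving it a crude form of memory."""
    def __init__(self, input_weights, self_weight, bias):
        self.input_weights = input_weights
        self.self_weight = self_weight  # synapse from itself, last frame
        self.bias = bias
        self.value = 0.0                # state carried between frames

    def step(self, inputs):
        total = sum(w * x for w, x in zip(self.input_weights, inputs))
        total += self.self_weight * self.value + self.bias
        self.value = sigmoid(total)
        return self.value

# Fed a constant input, the output drifts frame by frame instead of
# staying fixed -- the temporal component at work.
n = RecurrentNeuron([1.0], self_weight=0.5, bias=-0.5)
history = [n.step([1.0]) for _ in range(5)]
```

A purely feedforward neuron given the same constant input would return the same value every frame; here the self-synapse makes each frame depend on the last.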
Convolutional neural networks iteratively examine a set of data, usually arranged like a multidimensional array. This makes them particularly well-suited to, for example, image processing, and thus they have been instrumental in much of the recent advances in computer vision and similar fields. They are arranged largely like the simple feedforward or recurrent networks, save for a steadily decreasing layer size as the network goes deeper to "distill" the data into usable outputs. The inputs of the network are moved across the array (or image), and gain information from a rolling patch of data.
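The "rolling patch" part can be sketched as a plain 2-D convolution in Python (the image and kernel values are made up; a real CNN would learn the kernel weights):

```python
def convolve2d(image, kernel):
    """Slide a small kernel (the 'rolling patch') across a 2-D array,
    producing one weighted sum per position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Weighted sum of the patch under the kernel at (i, j).
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A 3x3 vertical-edge kernel over a 4x4 "image" yields a 2x2 output:
# each layer shrinks the data, "distilling" it toward the outputs.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
result = convolve2d(image, kernel)  # [[3, 3], [3, 3]]
```

Notice the 4x4 input became a 2x2 output: that shrinkage is the steadily decreasing layer size mentioned above, and the large values mark where the kernel found the vertical edge it responds to.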
I'm a little unclear on CNNs myself, so forgive me if that section is… lacking.
Post
Thu Sep 14, 2017 2:56 am
#10
Re: Movidius: Neural Network on a USB
Thank you. That already helped a lot.
I think I now got a little firmer grasp on the topic of neural networks.
Automation engineer, lateral thinker, soldier, addicted to music, books and gaming.
Nothing to see here
Flatfingers wrote: 23.01.2017: "Show me the smoldering corpse of Perfectionist Josh"