I wonder, when it comes to recognising shapes ... aren't neural networks pretty good at that?
You could feed your analyzed stations and doodles into something like hyperGAN. It spits out images of station shapes that it thinks are pretty. You evaluate them, take the good ones and add them to the baseline, and repeat until it reaches the point where you'd say "yes ... now 80% of them look like stations".
Then you have your station algorithm with its parameters. You take a minimizer that iterates through the possibility space of your parameter set and uses the neural network as a cost function to evaluate the outcome of each parameter set. This way you could narrow down the local or global minima for each new iteration of the ship algorithm overnight.
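Roughly what I mean, in toy form: a dumb minimizer samples the parameter space and keeps whatever the network scores best. The `ugliness` cost here is a made-up stand-in for the trained network's "how un-station-like is this?" score, not anything real:

```python
import random

def ugliness(params):
    # Hypothetical stand-in for the trained network's score.
    # Toy cost: pretend the network likes parameters near (0.3, 0.7, 0.5).
    target = (0.3, 0.7, 0.5)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def random_search(cost, dims=3, iters=5000, seed=42):
    # Simplest possible "minimizer": sample the parameter space
    # uniformly and keep the best-scoring candidate seen so far.
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(iters):
        candidate = [rng.random() for _ in range(dims)]
        c = cost(candidate)
        if c < best_cost:
            best_params, best_cost = candidate, c
    return best_params, best_cost

best, best_cost = random_search(ugliness)
print(best, best_cost)
```

Swap the random sampling for any smarter optimizer and the toy cost for the actual network, and that's the overnight loop I'm imagining.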
I just realized hyperGAN does not work this way, does it? It creates images; it doesn't recognise them? Well then ... there's surely some free code for learned image recognition out there. ^^
(hyperGAN could work by mapping the parameters of good-looking stations onto pixels of an image (with brightness representing the parameter value) and then using hyperGAN to spit out new parameter sets)
Obviously I have no idea what I am talking about XD
It's way too late here. I should sleep. Good night. ^^
Ah, oh, wait!
Lindsey, love your devlog! (like every LT devlog ... don't feel too special)
But yeah, what Victor said. Waiting for something awesome is a trap. ^^