The privilege of being able to lean into the future and witness cutting-edge technology at the dawn of the AI revolution is just one of the perks of being a well-known blogger operating at the intersection of culture and technology.
I was recently honored to submit a random photo of my grandson, taken with my iPhone 6 Plus, to an MIT Applied Artificial Intelligence project that used custom machine-learning algorithms to create models of the styles of various noted artists. The team was then able to apply its impressive computing resources to generate ‘custom art’ from the aforementioned photograph – with stunning results…
OK, who am I kidding? These were generated on my phone in a couple minutes using a free app called Lucid, but you probably already knew that. Because your intelligence isn’t artificial.
The raw source photo (at the top of the post) is nice enough. It’s shot against an amazing high-desert sky near Taos, NM, and let’s face it, the kid is beautiful. But it IS a phone camera, the subject is dark, the face is in the shadows, and so on. So to me, the results created by Lucid are pretty awesome. (And Lucid is not the only player in this space. Not by a long shot.) Yes, the MIT Applied Artificial Intelligence stuff was BS I was spewing, but these results DO show the potential of training a neural net on one thing (artistic styles), applying an input (a cool picture of my grandson), and getting something new and unexpected as an output. Fun… and maybe even useful!
Did I mention that an app harnessed deep learning and AI to do this? And that it was done on my phone? And that it was free?
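For the curious: I have no idea what Lucid actually runs under the hood, but the general recipe is well known, and you can play with it yourself. Below is a minimal sketch using a pre-trained style-transfer model from TensorFlow Hub (the Magenta arbitrary-image-stylization model). The filenames are placeholders, and this is just an illustration of the technique, not Lucid’s implementation.

```python
# A minimal, hypothetical sketch of the idea -- NOT Lucid's actual pipeline.
# It uses Magenta's pre-trained arbitrary-image-stylization model from
# TensorFlow Hub: feed in a content photo and a style image, get back a new
# image rendered in that style. Filenames here are just placeholders.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, max_dim=512):
    """Load an image as a float32 tensor in [0, 1] with a batch dimension."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_dim, max_dim))  # shrink so it runs in seconds
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return tf.constant(arr[np.newaxis, ...])  # shape: [1, height, width, 3]

# The heavy lifting already happened: this network was trained ahead of time
# on artistic styles, so applying it is just one forward pass.
stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

content = load_image("grandson_photo.jpg")  # the source photo (placeholder name)
style = load_image("some_painting.jpg")     # any artwork whose style to borrow

stylized = stylize(content, style)[0]       # [1, height, width, 3], values in [0, 1]

Image.fromarray(np.uint8(stylized[0].numpy() * 255.0)).save("stylized.jpg")
```

The punchline is that all the training happens up front; turning a photo into ‘custom art’ is a single pass through the network, which is why a free app can pull it off on a phone in a couple of minutes.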
These are early days, kids! It just gets more amazing (and exciting and scary and threatening and empowering) from here on out!
That is a seriously handsome grandson you got there. Probably his mom is a stone fox.
Yup. Just like her mom