Generating Watercolors with AI
We have all seen how artificial intelligence (AI) has been used with photography to do things like fix photographs in Adobe Photoshop, apply fun filters to photographs on Instagram, or transform a photograph to embody the style of a famous painting. That last example, style transfer, uses an AI model called a Generative Adversarial Network (GAN), which can also be trained on a collection of images to generate entirely new ones. This is what blew my mind this week.
I am part of a global organization at Microsoft, Commercial Software Engineering (CSE), where we help partners and customers with their most complex software engineering challenges by coding with them. The team is made up of software engineers, data scientists, and program managers like me who all love to hack and build stuff. We just completed a week-long hackathon, combining virtual and in-person teams, where individuals proposed ideas that interested them, recruited a team, and built something in less than four days.
Paul Butler, a software engineer, recent Computer Science graduate, and CSE team member, proposed an idea to use StyleGAN2-ADA to generate artwork. I had been learning about GANs, and my colleague Kevin Ashley had just published a book on creating art with AI – and that really piqued my interest. I had never met Paul before, but we quickly found a common thread and interest, as we both went to college in Arizona: he’s an ASU Sun Devil and I’m a UA Wildcat – and he’s super smart. Paul knew how to run the GAN using Python, but he didn’t have a training image set to work from – and that’s where I was able to help.
30 Years of Artwork
Ever since I started college studying architecture, I have been keeping visual journals of my artwork, taking a journal and compact supplies with me to document my journey when I travel for work or leisure. Having done this consistently for thirty years, I have filled more than 20 journals with drawings, watercolors, collage, pop-up craft, stickers, circuitry, and photographs. When the pandemic started and I couldn’t travel, I scanned and categorized the pages using Adobe Photoshop Lightroom, adding dates, locations, and tags to the images identifying media and medium. Ending up with a collection of over 2,000 journal pages, I had a tagged data set of imagery. I proposed to Paul that we use a subset of the pages, 315 of my watercolors, to train the GAN to make an AI that could create watercolors.
I started watercoloring when I was 18, on a summer trip with Chuck Albanese, one of my architecture professors, to sketch and watercolor in Italy and Greece. Watercoloring starts out hard as you learn to control the water and color, but like every skill, you get better at it with practice.
I had just finished my first year of architecture school, where I learned from Kirby Lockard how to accurately draw freehand perspective, which is crucial in watercolor as a painting often starts with a base drawing in pencil or pen. I loved watercoloring because it was portable and quick, so I could take my creativity with me wherever I went and create something in as little as 15 minutes.
Journals as Memento Collections
Ever since that trip, for more than thirty years, I’ve been sketching and watercoloring. With the Grail Diary from Indiana Jones and the Last Crusade as inspiration, I started using journals to record my journey, thoughts, and ideas and to collect the ephemera I gather along the way. What I found is that while I am sitting somewhere creating artwork for a page of my journal, my mind is recording everything around me: the people I meet, the conversations I have, what else I observe, and my feelings. The artwork turns into a mnemonic for the moment, a memento. My journal is a serial collection of personal mementos that I can openly share with others, knowing that I don’t have to write a personal narrative to record it. When I look at a page in my journal years later, I instantly recall the experience.
Training the GAN
Paul started building the Python code to generate images while I refined my watercolor collection, isolating the watercolors on pages that also contained other content like line drawings and collage, to make a consistent collection. Over the course of more than thirty years my style has evolved, so there is a huge amount of variability in the collection; the only common thread is that they are all watercolors and all created by me. Paul wrote the code so that a single numeric value from 0-999, a random number seed, would be the only input variable to generate each image. He ran the AI model for a few hours, we started seeing results come out of it, and I was blown away!
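To give a sense of how a single number can stand in for an entire image: generation pipelines like StyleGAN2-ADA derive the generator's latent input deterministically from the seed. The sketch below shows only that seed-to-latent step (the actual image generation requires the trained network and NVIDIA's repository, which aren't shown here); the 512-dimension latent size is the StyleGAN default and an assumption on my part.

```python
import numpy as np

def seed_to_latent(seed: int, z_dim: int = 512) -> np.ndarray:
    """Derive a reproducible latent vector from a single integer seed,
    in the spirit of how StyleGAN2-ADA's generate.py turns seed values
    into inputs for the generator network."""
    rng = np.random.RandomState(seed)  # same seed -> same vector
    return rng.randn(1, z_dim)

# Seed 780 always maps to the same point in latent space, so a
# generated image can be reproduced from its number alone.
z = seed_to_latent(780)
```

Because the mapping is deterministic, sharing just the seed (like #0780 below) is enough for anyone with the same trained model to regenerate the exact image.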
Mementos in the Images
The AI found patterns and my techniques in the watercolor collection and started creating new, unique abstract images that instantly triggered memories in me. I not only recognized elements in the images, but those images triggered memories of the experiences I had when creating them, combining memories in a way that had only happened for me in dreams. These images were mementos that spanned time and space.
The image above, #0780, reminds me of the time I took my son Sam, then 12, to a nude figure drawing session at the local art store five years ago, where we both spent the evening sketching and painting. As you might imagine, it was a memorable evening, and seeing this image, I was brought right back there. As many of my watercolors over the years have been of monumental architecture, the AI model is heavily influenced toward that kind of subject. In this generated artwork, I also see a hint of the architecture of Antoni Gaudí, one of my favorite architects, whose work is the subject of a number of the paintings in the training set.
My Style in the Images
In the more than 1,000 generated images, I saw my style, composition, coloration, linework, and brushwork, but in very abstract forms – much more abstract than I’m comfortable consciously doing in my own work, which today would be best characterized as Urban Sketching. Paul tells me that we can do much more to increase the variability, improve the quality, and refine the model. You will likely notice a recurring theme in many of the images of a monumental building on the right side; this is either an artifact of the AI model or a hidden propensity of mine to paint buildings on the right side of a watercolor – definitely an interesting direction for investigation. Paul is eager to build a model against my whole corpus of artwork, which sounds like a very cool idea.
Putting the Artwork in a Gallery
I took more than 200 of the generated images and put them in a virtual art gallery using my Galeryst site. Generated artwork in a generated gallery seems very appropriate. I originally created Galeryst to share my journals with others as bound journals don’t typically exhibit well in physical galleries.
Sharing the Entire Collection
I’ve shared the entire collection of generated watercolor artwork using Adobe Photoshop Lightroom because I want your feedback on the images. You can also see a slideshow of the entire collection on the Lightroom site if you click on the … in the upper right of the page. Here is the feedback I would love from you:
- If you like an image, click on the ❤️ heart button in the lower left corner.
- If an image reminds you of something or somewhere, please click on the 💬 comment button in the lower left and leave a comment.
On Adobe Photoshop Lightroom Mobile, there’s a cool feature to “Choose Best Photos”. I ran the analyzer on the collection with a quality threshold of 16, and these are the top 16 that it picked:
Pretty cool, right?
Galeryst Beta Applications Open
Galeryst is a new site that builds 3D galleries from your Adobe Photoshop Lightroom albums. We are looking for Lightroom users who are interested in trying it out in a private beta test before the site launches publicly. If you are interested, go to https://galeryst.com to apply to the beta program.
Mars Perseverance Landing with MakeCode Arcade
One of the first video games that I remember having fun playing was Lunar Lander, created by Atari. Not only was it a game in the arcade that cost a shiny quarter per play, but it also ran on the TRS-80 computers in my school’s computer lab, which were free for me to use. I liked that deal, especially as I was learning how to use BASIC programming to make the pixels move on the screens of those computers. For me, and many kids of my generation, computer games – very basic computer games – were our draw to computers. I pored over computer magazines, which had listings of the BASIC code for games that I typed in, line after line. Debugging was going through the code again line by line until I found each of my typos. I then started on my own ideas: using a for/next loop, I was able to make a spaceship fly across the screen just like I saw Kirk’s Enterprise accelerate to warp speed. I was hooked.
That’s how I started coding – that’s why I started my path in software at the age of 11, with games like Lunar Lander.
When a colleague sent me an Adafruit PyBadge in December to experiment with, I immediately thought back to the games like Lunar Lander that inspired me as a young boy to start coding.
The Adafruit PyBadge is a mini $34.95 computer that you can code with MakeCode Arcade, CircuitPython, or Arduino. You write the code on a computer and download it to the device, which has a small color screen, buttons, lights, sensors, a speaker, and various connectors to enable all sorts of other circuitry. I immediately thought of Lunar Lander, but I had also heard about the NASA Perseverance mission to Mars that was underway and thought I might try to create something similar for the Perseverance mission by the time it lands on Mars on 2/18/2021.
Once I started researching the mission, I realized that it was far more complex than Lunar Lander. My challenge was to make a game around the landing of this rover on Mars. Here are the steps:
- Capsule enters Mars atmosphere and decelerates with heat shield
- Capsule slows down with parachute
- Heat shield ejected
- Lander drops out of capsule and starts rockets
- Lander gets close to surface and lowers rover to ground with cables
- Lander flies off
- Rover starts its mission to explore Mars, looking for signs of ancient life.
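The powered-descent portion of those steps boils down to a simple per-tick physics update: gravity accelerates the lander downward, and the player's thrust fights it. This toy Python sketch is not the actual MakeCode blocks – the constants are made up for illustration (only the Mars gravity value is roughly real).

```python
MARS_GRAVITY = 3.7   # m/s^2, approximate Mars surface gravity
THRUST = 9.0         # m/s^2 while the player holds the thrust button
TICK = 0.1           # seconds of simulated time per game tick

def descend_step(altitude: float, velocity: float, thrusting: bool):
    """Advance the lander one tick; positive velocity is downward."""
    accel = MARS_GRAVITY - (THRUST if thrusting else 0.0)
    velocity += accel * TICK
    altitude -= velocity * TICK
    return max(altitude, 0.0), velocity

# Holding thrust slows the fall; releasing it lets gravity win.
alt, vel = 1000.0, 60.0
for _ in range(50):
    alt, vel = descend_step(alt, vel, thrusting=True)
```

The game challenge is tuning thrust, fuel, and timing so the rover touches down at a survivable speed – the same balancing act the original Lunar Lander made addictive.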
My mission was to create a game around that, so I started building it with MakeCode Arcade, using the drag-and-drop interface to make something fun. MakeCode is a web-based programming environment for kids that can be used to program Minecraft, hardware devices like the BBC micro:bit and Lego Mindstorms, and games. My first experience with MakeCode was animating the BBC micro:bit on the bag I use for my journaling/art supplies, which I’ve shared on Thingiverse. MakeCode Arcade is a version of MakeCode that makes it easy to build games with sprites, animations, and interactivity. The beauty of MakeCode is that you can switch between the graphical block-based programming and the code view to see that they do the exact same thing – a great way to “graduate” to text-based coding.
I was able to get pretty far, but I ran out of time, as the actual Perseverance rover will land on Mars in two days. I’ve shared the source code so anyone can try it out and use it as a starting point for their own experimentation. The amazing part about the Perseverance mission is that the whole landing sequence has to be done by computers without direct human control, most likely with artificial intelligence, since the time it takes for radio signals to travel between Mars and Earth is between 4.3 and 21 minutes.
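That delay range follows directly from dividing the Earth-Mars separation by the speed of light; the distances below are illustrative approximations I chose to match the quoted figures, since the actual separation varies as the planets orbit.

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way radio delay between Earth and Mars at a given separation."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60.0

# At roughly 77 million km the delay is about 4.3 minutes; at roughly
# 378 million km it is about 21 minutes.
```

By the time mission control sees the capsule hit the atmosphere, the whole landing is already over – hence the fully automated "seven minutes of terror."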
The gameplay is this: once the lander detaches from the capsule, use the down arrow to slow the descent. Once close to the surface, press the A button to release the rover. The rover can then explore the surface of Mars with the left and right buttons, pressing A again to send a pulse looking for water underground. That’s as far as I got with the time that I had. Anyone is free to tinker, modify, and adapt it – just please share with me what you do with it. I’d love to see where this goes.
I think the possibilities for kids today to learn coding and build fun games that can be loaded onto a tiny computer are so cool. The block-based programming makes it so easy to learn the basic concepts of coding and create fun games. I printed a basic case for the PyBadge with my 3D printer, and here’s my Perseverance game playing on it.
What can your young coder create today?
Animating your Web Meeting Experience
Like many of you in the pandemic, I spend many hours of my days in web meetings. For me it’s a combination of Teams, Zoom, WebEx, and Google Hangouts. Turning cameras on really helps to connect to everyone, even though it’s through a small array of dancing pixels. I’ve had fun using tools like Adobe Character Animator and OBS Studio with a reMarkable tablet to turn my camera feed into something a bit more interesting.
Using Adobe Illustrator, I’ve made a custom frame that I’ve started using in OBS Studio to express myself, similar to how I might use my attire or decorate my workspace in an office setting. I apply a Chroma Key filter in OBS Studio, then composite onto the frame a number of text and image elements that have meaning to me – including a slideshow of my artwork. It’s been a great conversation starter before the start of a meeting. But I wanted to do a little bit more….
I’ve always thought that adding a little bit of animation might be fun, and the free Microsoft Photos app makes it easy. If you are involved in selling or talking about real-world items, or even virtual ones, there is a great opportunity to share animated 3D models of those items in your camera feed as well. Here’s how I did it:
- I opened my frame .png image with the Microsoft Photos app and selected Edit & Create…Create a video with Music.
- I named the video Animated Frame and pressed OK.
- I tapped on the 3.0 timespan on the first frame in the storyboard and changed the timespan to 10 seconds.
- I tapped on the 3D effects button to open the 3D Effects pane.
- In the 3D library tab, Sci-Fi & Fantasy group, I selected the Landing UFO and it was imported and showed up on my frame.
- I then dragged and resized the UFO to the upper corner of the frame.
- In the pane, I changed the quick animation to Hover and reduced the volume to 0. I also dragged the timeline span to cover the whole duration of the video.
- I wanted to add one more effect, so I clicked on the Effects tab and added Plasma sparks, then moved it to the upper right and reduced the volume to 0. I also changed the timespan to cover the whole video duration.
- Now that I was done with that, I clicked on the Done button and then the Finish video button, selecting the High 1080p video quality and pressing Export.
- I saved the file to my computer as Video Frame.mp4, resulting in this video
- Next, in OBS Studio, I added a new Media Source called Video Frame, selecting my Video Frame.mp4 as the source file and checking the Loop button.
- I added a Chroma Key filter to the Video Frame media source
And I have an animated saucer and plasma sparks in my camera feed once I press the Start Virtual Camera button in OBS Studio. The camera then shows up in my list of available cameras as OBS Virtual Camera. I keep the frame image right behind the video frame so I can turn off the animations if they get too distracting.
The way I look at it, if all people see of me is a rectangle of pixels the size of a credit card, I want all of those pixels to count. Please share what you create to make your web meetings more fun.
Using a reMarkable Tablet in Web Meetings
Scott Hanselman posted a video earlier this month that gave me an idea. He showed how he used OBS Studio and Microsoft Whiteboard to create a transparent glass whiteboard effect in Microsoft Teams, and I saw in it an interesting way to use my reMarkable tablet to do something similar.
Working from home, like many of you, my primary device for work is a desktop computer, which lacks the drawable surface and pen of a Microsoft Surface device. Recently, I got the new reMarkable 2 paper-like tablet, and I really like it. The device works with a dedicated desktop app that, in addition to synchronizing notebooks, has a Live View capability: the display on the computer stays in sync with the tablet, updating every time the page on the tablet is drawn on. I was able to take the live output from the reMarkable app on my PC as an input source in OBS Studio and replicate the transparent glass whiteboard effect to use during Microsoft Teams meetings. If you aren’t familiar with it, OBS Studio is a free, open-source application for Windows, macOS, and Linux that you can use to stream and record from your computer, mixing video, desktop windows, audio sources, and graphics. Here is how I did it:
- On my reMarkable 2 tablet, I added a new page to a notebook, using the blank page template and set the orientation to landscape.
- On my PC, I started the reMarkable app.
- On my reMarkable tablet, I turned the LiveView (Beta) option on in the share menu
- Once I did that, I was prompted in the app on my PC to accept the LiveView request. At that point, the app’s screen mirrored my tablet.
- In OBS Studio, I added a Video Capture Device for my webcam and stretched it to the size of the screen output.
- I then added a Window Capture source, selecting the reMarkable app as the Window.
- Now the whiteboard is positioned over the video capture device. Before I resize it, I need to crop the edges.
- I drag the edges of the new Window Capture element with the Alt key pressed to crop out the frame and chrome around the whiteboard.
- Now I resize the whiteboard so it covers my Video Capture Device.
- The last thing I do is add filters to the Window Capture to make it all work: select the Window Capture source, right-click, and select Filters…
- Add a Color Correction filter to make the white background green.
- Add a Chroma Key filter to remove the background. You may need to adjust the Similarity value if you use gray pens on the reMarkable.
- Add another Color Correction filter to make the black text white.
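To see why that three-filter chain works, here is a rough per-pixel sketch of its end result: the white page becomes transparent and the dark ink ends up white. This is an illustration, not how OBS actually implements its filters – I collapse the color-correction and chroma-key steps into one mask, and the brightness threshold of 128 is a guess.

```python
import numpy as np

def whiteboard_to_overlay(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 uint8 capture of the whiteboard window. Returns an
    RGBA frame where the white page is fully transparent and the dark
    ink is recolored white."""
    gray = rgb.mean(axis=2)                # per-pixel brightness
    ink = gray < 128                       # dark pixels = pen strokes
    rgba = np.zeros(rgb.shape[:2] + (4,), dtype=np.uint8)
    rgba[ink] = [255, 255, 255, 255]       # ink becomes opaque white
    return rgba                            # page stays transparent

page = np.full((4, 4, 3), 255, dtype=np.uint8)  # blank white page
page[1, 2] = [10, 10, 10]                        # a single ink pixel
overlay = whiteboard_to_overlay(page)
```

In OBS the intermediate green step exists because the Chroma Key filter needs a keyable color to knock out, which is why the first Color Correction filter turns the white background green before the key is applied.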
And then you can stream, record, or use OBS as a Virtual Camera in your online meetings. The whiteboard drawing from your tablet is saved, so you can easily export it to a PDF or image, and it appears in the video as well.
Give it a try today and add more drawing to your online meetings!
Some users have pointed out a limitation to the LiveView (Beta) feature in the reMarkable app where erasing on the tablet does not immediately erase on the LiveView. A quick fix to refresh the LiveView is to tap the Full-screen button in the lower right corner of the app to make the app go full screen, and then tap it again to go back to the original size. This triggers a refresh of the LiveView with the erased ink. I reported this bug to the reMarkable team.
For the past few years I have been coaching and mentoring, and I have found helping others greatly rewarding. Most people were finding me through my work at Microsoft and LinkedIn, but I thought it was time to launch a website focused on it. My specialty is helping people combine their creative passions with their love of technology. Take a look today at TechCreativeCoaching.com.
Take a look at what my clients have to say, the reading list, and the video list. I’d love your feedback, and if you want to book an appointment to discuss your career, please fill in the form on the home page and I’ll get back to you.
Virtual Flight Sketching
The new Microsoft Flight Simulator has opened a new location for me to take my sketching: anywhere in the world. One of the first games I played on my first computer, an IBM PCjr, was Microsoft Flight Simulator in the 1980s, and that started my journey into computation with a fascination for a three-dimensional environment represented on a flat screen. The technology has advanced amazingly since then, and so have my drawing skills.
The imagery and geometry now in Microsoft Flight Simulator is very accurate, lifelike, and, for me as an urban sketcher, good enough to sketch from. The application gives me the foreground (an airplane cockpit), the midground (buildings and geology), and the background (scenic vistas with accurate weather rendering).
I pick a location and an airplane, fly to get just the right point of view, and then press [Pause]. I then start sketching in my journal from Iona Handcrafted Books, in Adobe Fresco, or even in my Sketch 360 app. Since these sketches aren’t from real life, I shouldn’t call them Urban Sketches, so I’ve decided to call them Virtual Flight Sketches, with the hashtag #VirtualFlightSketch.
I’ve always wanted to see the pyramids of Egypt.
I created my latest Virtual Flight Sketch with my Sketch 360 app and exported it as an animation video that you can interact with.
For this sketch, I had Flight Simulator on the left screen and Sketch 360 running on the Wacom One display tablet as the drawing canvas, with the 360° view shown on the right display.
The funny thing about pausing in Flight Simulator is that the plane stops in mid-flight but the clock does not. This means that if I’m doing a sketch at sunset, the lighting is going to change over the course of my sketch. It adds a realistic aspect to the experience. I know that I could easily take a screenshot and work from that, but I choose not to.
Where should I fly for my next #VirtualFlightSketch ?
During the shelter-in-place, I have been compiling pages from my journals into a first book and wanted to share it with my community to solicit some feedback.
The pages are in chronological order spanning 10 journals over 29 years.