From 2016 to 2019, I co-wrote a research paper with David Rothschild about prediction markets. We analyzed trading data from a few markets offered by the PredictIt website to learn about trader behavior and profitability.
I designed and implemented a new approach for applying style transfer to 360° photos and video. Using contemporary and historical paintings and images from Google Street View, I created art that encourages viewers to appreciate what is beautiful or poignant about the modern world and to see it with new eyes.
In 2008 I built a fully automated computer program that could make (real) money on its own by trading securities in a small market called the Iowa Electronic Markets. The securities were linked to the 2008 Presidential election. In 2011 I published a research paper detailing the results and how the system worked.
Processing is a Java application popular with artists and creative technologists like myself. Processing allows users to build extensions to expand the application's features and to share those tools with the community. I built one such extension called Camera3D. My code enables users to apply various 3D algorithms to ordinary P3D Processing sketches to provide viewers with an illusion of depth. The collection of techniques includes anaglyphs (think red-cyan glasses), 360° video, tools for creating content for 3D TVs, and split-depth optical illusions.
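Camera3D itself is a Java/Processing library, but the core anaglyph idea is language-agnostic: render the scene from two horizontally offset cameras, then merge the red channel of the left-eye view with the green and blue channels of the right-eye view. A minimal NumPy sketch of that channel merge (not Camera3D's actual code):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two stereo views into a red-cyan anaglyph.

    left, right: HxWx3 uint8 RGB arrays rendered from two
    horizontally offset camera positions.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]   # red channel from the left eye
    out[..., 1] = right[..., 1]  # green channel from the right eye
    out[..., 2] = right[..., 2]  # blue channel from the right eye
    return out
```

Viewed through red-cyan glasses, each eye sees only its own rendering, and the brain fuses the two into a single scene with depth.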
I created another Processing library called ColorBlindness. It is a stripped-down fork of the Camera3D code that provides tools for simulating color blindness and performing daltonization. It also includes educational utilities to help users explore color vision deficiencies.
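Color blindness simulation is typically a per-pixel linear transform, and daltonization redistributes the color information lost in that transform into channels the viewer can still perceive. A sketch of the idea in NumPy, using an approximate protanopia matrix in the style of the widely cited Machado et al. model (the values here are illustrative, not the library's exact coefficients):

```python
import numpy as np

# Approximate full-severity protanopia simulation matrix
# (illustrative values in the style of Machado et al.).
PROTANOPIA = np.array([
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
])

def simulate(rgb):
    """Simulate protanopia on an HxWx3 float array in [0, 1]."""
    return np.clip(rgb @ PROTANOPIA.T, 0.0, 1.0)

def daltonize(rgb):
    """Shift color information lost to protanopia into channels
    the viewer can still see (classic error-redistribution scheme)."""
    err = rgb - simulate(rgb)
    shift = np.array([[0.0, 0.0, 0.0],
                      [0.7, 1.0, 0.0],
                      [0.7, 0.0, 1.0]])
    return np.clip(rgb + err @ shift.T, 0.0, 1.0)
```

Note that pure red collapses toward a dark yellow-brown under the simulation while neutral greys pass through unchanged, which is exactly the behavior these educational tools are meant to make visible.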
I employed a Neural Style Transfer algorithm to improve the aesthetics of Google Street View images. Google has accumulated street-level imagery for much of the world, but the photos often look dull because of bad lighting or weather conditions. For this project I wrote Python code to download Google Street View images through their API and re-style them using a style transfer algorithm implemented in Python and TensorFlow. I then applied the process to sequences of images to produce beautiful animated videos of Google's imagery.
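The download step uses Google's Street View Static API, which returns one image per HTTP request. This is not my original script, just a minimal sketch of how such a request URL is assembled; the parameter names (`size`, `location`, `heading`, `pitch`, `fov`, `key`) come from the public Static API, and the key is a placeholder:

```python
import urllib.parse

STREETVIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def streetview_url(lat, lng, heading=0, pitch=0, fov=90,
                   size=(640, 640), api_key="YOUR_API_KEY"):
    """Build a Street View Static API request URL for one image.

    Sweeping `heading` or stepping `location` along a route yields
    the image sequences used to assemble animated videos.
    """
    params = {
        "size": f"{size[0]}x{size[1]}",      # width x height in pixels
        "location": f"{lat},{lng}",          # latitude,longitude
        "heading": heading,                  # compass direction of camera
        "pitch": pitch,                      # up/down angle
        "fov": fov,                          # zoom (field of view)
        "key": api_key,                      # placeholder credential
    }
    return STREETVIEW_ENDPOINT + "?" + urllib.parse.urlencode(params)
```

Each downloaded frame is then passed through the style transfer network before being stitched into video.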
I 3D printed a series of objects depicting a tesseract rotating in 4-dimensional space and animated them using Dragonframe. The animation and 3D prints help three-dimensional creatures like ourselves conceptualize higher-dimensional objects.
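The underlying math is straightforward: a tesseract has 16 vertices at (±1, ±1, ±1, ±1), a 4D rotation happens in a plane (here the x-w plane), and each rotated frame is perspective-projected down to 3D for printing. A small NumPy sketch of those three steps (my assumed reconstruction, not the project's actual code):

```python
import itertools
import numpy as np

def tesseract_vertices():
    """The 16 vertices of a tesseract: every point in {-1, +1}^4."""
    return np.array(list(itertools.product((-1.0, 1.0), repeat=4)))

def rotate_xw(points, theta):
    """Rotate 4D points by angle theta in the x-w plane."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 3] = c, -s
    R[3, 0], R[3, 3] = s, c
    return points @ R.T

def project_to_3d(points, viewer_w=3.0):
    """Perspective-project 4D points to 3D: scale each point by its
    distance from a viewer sitting at w = viewer_w."""
    scale = viewer_w / (viewer_w - points[:, 3:4])
    return points[:, :3] * scale
```

Stepping `theta` through a full turn and printing each projected frame gives the sequence of objects that Dragonframe then animates, with the perspective divide producing the familiar "cube inside a cube" look.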
I did a VFX project in After Effects for my animation class. I reverse-engineered the mechanics of a popular online video to insert frames of a movie into the same movie, positioned so that everything blends seamlessly. That description doesn't do it justice, so please have a look for yourself.
I built a website that allows you to explore Google's Quick, Draw! dataset. This was originally built for my Networked Media class but was later rebuilt on AWS using Amazon's S3, Lambda, Cognito, and DynamoDB services.
I designed and built a custom computer keyboard for a woman with cerebral palsy. Traditional keyboards are often difficult for people with motor impairments to use. In this physical computing project, I built the keyboard from laser-cut parts and an Arduino.
This is a data visualization project using live or recorded mouse and keyboard activity. Characteristics such as mouse position and keystrokes are manifested as colored lines and shapes. My computer's desktop background is generated in real time in response to user input: when I turn on my computer the background is black, but it quickly evolves into a unique visualization depicting my interaction with the machine.
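The core of a piece like this is a mapping from input events to visual attributes. A hypothetical example of such a mapping (the actual project's palette and rules differ): horizontal mouse position drives hue and vertical position drives brightness, so every sampled position yields a color for the next line segment.

```python
import colorsys

def event_to_color(x, y, screen_w=1920, screen_h=1080):
    """Map a mouse position to an RGB color.

    Illustrative mapping only: x controls hue, y controls brightness,
    so dragging across the screen sweeps through the spectrum.
    """
    hue = (x % screen_w) / screen_w
    value = 0.3 + 0.7 * ((y % screen_h) / screen_h)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return int(r * 255), int(g * 255), int(b * 255)
```

Running this on every sampled event and drawing a short segment in the returned color is enough to make the background accumulate a visible record of the day's activity.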
I have an odd fascination with connect-the-dots puzzles. I have designed a bunch of these over the years and have used them on the front cover of my holiday cards and the back of my personal cards. I think they are a great way to engage people creatively with something they don't usually pay much attention to.