Posts about python (old posts, page 3)

New py5 Release: 0.3a5

What's new:

  • Upgrade to the Processing 4.0 alpha 3 release

  • A new way of using py5: the render helper tools (see the sketch after this list). This was a great idea suggested by Allison Parrish.

  • Major cleanup of the IPython magics to make the magic names and their parameters more consistent. Notably, the units for all time-related parameters are now seconds. Any code that used the magics will need to be updated. Refer to the jupyter-notebooks documentation or the magic docstrings for more information.

  • An exciting new feature to simplify the creation of ad hoc Java extensions to boost Sketch performance. I haven't had a chance to document it yet, but trust me, it is pretty neat.
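
The render helpers deserve a quick taste: they let you draw into an off-screen Sketch and get back a Pillow image without writing a full Sketch class. Here is a minimal sketch of the idea using py5.render_frame; check the documentation for the exact function names and signatures.

```python
import py5

def draw_frame(s: py5.Sketch):
    # the helper hands the draw function an off-screen Sketch to draw with
    s.background(240)
    s.fill(255, 0, 0)
    s.no_stroke()
    for x in range(40, 400, 80):
        s.circle(x, 200, 60)

# render a single 400x400 frame and get back a Pillow Image
frame = py5.render_frame(draw_frame, 400, 400)
frame.save('frame.png')
```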

Also:

  • Lots of little bug fixes

  • Code improvements in the service of producing better documentation

  • Changes to the setup.py package requirements

What's Ahead:

  • Fixing OSX-specific issues

  • Documentation, documentation, and documentation

Happy Holidays!

Happy Holidays!

This is the 3D animation I made for my holiday cards, using the open source library I've been building, py5. You'll need ChromaDepth glasses to see the 3D effect properly.

Music: This is Christmas by Scott Holmes Music.

The snowflakes are from the old and widely used WWFlakes font by WindWalker64.

The actual source code for this animation is available on GitHub as a gist. This is a good example of how one can easily augment py5 with a Java Processing library.
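
The general pattern for doing that is to put the library's jar files on the Java classpath before importing py5 and then use the Java classes directly through JPype. Here is a stripped-down sketch of the idea; the jar directory and class name below are placeholders, not what the gist actually uses:

```python
import py5_tools

# the library's jar files must be on the Java classpath *before* py5
# (and the JVM) start up; 'jars' is a placeholder directory
py5_tools.add_jars('jars')

import py5
from jpype import JClass

# placeholder class name standing in for the Processing library's class
WaveGenerator = JClass('com.example.WaveGenerator')
wave = WaveGenerator()

def setup():
    py5.size(500, 500, py5.P3D)

def draw():
    py5.background(0)
    py5.translate(py5.width / 2, py5.height / 2)
    # Java methods are called like ordinary Python methods via JPype
    py5.box(100 + wave.valueAt(py5.frame_count))

py5.run_sketch()
```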

This animation took some time to create because I first had to figure out how to implement ChromaDepth in Java. It's also the first time I did something notable in Processing using shaders, and that took some effort to learn. Shaders are a topic I've been wanting to explore for a long time, and I'm happy I got the opportunity to do so while creating this. I'm also happy that py5 performed well during the development process. I didn't have to fix any bugs. Hooray!

Also have a look at the animations for 2015, 2016, and 2018. Those animations all require red-cyan anaglyph 3D glasses.

py5 blog

This is my first py5-related post. The purpose of this series of posts is to document my progress developing the library. This includes technical aspects of the library, such as the release schedule, and non-technical aspects such as documentation and community growth.

In addition, as the py5 community grows, I will post links to notable py5 projects. I'd like to highlight everything from student work to the work of professional artists to commercial applications. I do believe in the usefulness of py5 and I'm so excited for the creative community to start using it.

Data Assembly Complete

Milestone #2: Data Assembly

I'm comfortable saying I've completed this milestone. I've finished all the major features and have a nice interface for interacting with the downloaded data.

There are a few minor issues remaining, but none require much time or brainpower. They are mostly nice-to-have enhancements, like better error checking in my code, that I feel compelled to do but that aren't critical right now. I'll complete them as time allows.

The important thing is that I can now begin downloading the data I need without fear that I will need to download everything a second time later.

I made an interactive tool in matplotlib to visualize a spatial map of the locations I've downloaded data for. It looks like this:
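
The tool itself isn't much code. A simplified stand-in for it (the coordinates here are made up, and the real tool does more) is a scatter plot of the downloaded locations with a pick event that reports which point you clicked:

```python
import matplotlib.pyplot as plt
import numpy as np

# made-up panorama locations (longitude, latitude) standing in for the real data
rng = np.random.default_rng(0)
lons = -74.00 + 0.02 * rng.random(200)
lats = 40.70 + 0.02 * rng.random(200)

fig, ax = plt.subplots()
ax.scatter(lons, lats, s=15, picker=True)
ax.set_xlabel('longitude')
ax.set_ylabel('latitude')
ax.set_title('downloaded panorama locations')

def on_pick(event):
    # report which downloaded location(s) the click landed on
    for i in event.ind:
        print(f'location {i}: ({lats[i]:.5f}, {lons[i]:.5f})')

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()
```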

Read more…

Data Assembly

Milestone #2: Data Assembly

The second step of this project is to access the Google Street View data and organize it in a suitable format. In my project plan my target was to reach this goal by February 21st (last Wednesday). Although I have accomplished a lot, I have not achieved everything I wanted to for this milestone. I expect to hit it by next week at the latest.

Here's what I have achieved.

First, I can download all of the relevant data from Google. This includes all of the panorama image data and metadata. I can also access the panorama IDs for the neighboring locations. All of the metadata is stored in a database.
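
To give a sense of what that looks like, here is a stripped-down sketch of the metadata step using the documented Street View Static API metadata endpoint and a SQLite table. It is not my actual downloader code, the API key is a placeholder, and the neighboring-panorama lookup is omitted:

```python
import sqlite3
import requests

METADATA_URL = 'https://maps.googleapis.com/maps/api/streetview/metadata'
API_KEY = '...'  # placeholder

def fetch_metadata(lat, lon):
    # ask the Street View Static API which panorama covers this location
    resp = requests.get(METADATA_URL,
                        params={'location': f'{lat},{lon}', 'key': API_KEY})
    resp.raise_for_status()
    return resp.json()

con = sqlite3.connect('streetview.db')
con.execute("""CREATE TABLE IF NOT EXISTS panoramas
               (pano_id TEXT PRIMARY KEY, lat REAL, lon REAL, date TEXT)""")

meta = fetch_metadata(40.7128, -74.0060)
if meta['status'] == 'OK':
    con.execute('INSERT OR IGNORE INTO panoramas VALUES (?, ?, ?, ?)',
                (meta['pano_id'], meta['location']['lat'],
                 meta['location']['lng'], meta.get('date')))
    con.commit()
```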

Read more…

Data Investigation

Milestone #1: Data Investigation

My first step is to investigate my data options for this project. As discussed in my plan, I am considering Google Street View data and LiDAR data. The Street View data is my first choice, but I realized that it might be different from what I expect, or have weird complications that make it difficult or impossible to do what I have in mind. I wanted to consider alternatives, and there's a lot that interests me about LiDAR data. Of course, that data might be impossible to work with too. In any case, I needed to find these things out right away, while it is still easy to change course on this project.

In short, the Google Street View data is easy to work with and close to what I expected. Google provides a convenient API that is properly documented. Unfortunately, the depth data discussed in this blog post does not come from the API, and that information is compressed in a format I have not yet parsed. The author of that post does provide C++ code for doing so; I am optimistic that I will be able to translate it to Python and/or integrate their process into my code.
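
If, as that post suggests, the depth payload is a base64-encoded, zlib-compressed blob, the first decoding step in Python might look like the sketch below. That framing is an assumption on my part that I have not verified against real data yet; the header and plane parsing that would follow is exactly the part I still need to translate from the C++ code.

```python
import base64
import zlib

def decode_depth_blob(blob: str) -> bytes:
    # assumption: the payload is URL-safe base64 (possibly missing its
    # padding) wrapped around zlib-compressed binary data
    padded = blob + '=' * (-len(blob) % 4)
    compressed = base64.urlsafe_b64decode(padded)
    return zlib.decompress(compressed)

# raw = decode_depth_blob(depth_string)
# ...parsing the binary header and depth planes comes next (from the C++ code)
```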

LiDAR data is also well documented but extremely complex. I've worked with complex data before and am confident I can manage this if I put in the time. My objection is that taking the project in that direction would take a good portion of the class. I would have less time to learn about the topics I want to be learning about.

Additionally, the challenges I would face with the Google Street View data resonate with me in a way that the LiDAR data challenges do not.

My conclusion is that I will use the Google Street View data for this project. Sometime after the semester is over I might spend more time with the LiDAR data and get some experience working with it. It would be a great choice for a future project.

Read more…

Tuning Hyperparameters

Our last Learning Machines assignment is to calibrate the hyperparameters for a Multilayer Perceptron. Patrick gave us a working model using the MNIST database of handwritten digits. The model uses a Restricted Boltzmann Machine to reduce the dimensionality of the data and then a Multilayer Perceptron to classify the digits.

I was able to achieve an out-of-sample accuracy of almost 96%. This is in line with the results of other researchers.
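
Whatever the class model uses under the hood, the same architecture can be sketched with scikit-learn if you want to experiment with the hyperparameters yourself. This is an illustrative stand-in, not the class model, and it subsamples MNIST to keep the search quick:

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline

# MNIST digits, scaled to [0, 1] as the RBM expects
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X = X / 255.0
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=10000, test_size=10000, random_state=0)

pipeline = Pipeline([
    ('rbm', BernoulliRBM(random_state=0)),                 # dimensionality reduction
    ('mlp', MLPClassifier(max_iter=200, random_state=0)),  # digit classification
])

# a small grid over the hyperparameters being tuned
search = GridSearchCV(pipeline, {
    'rbm__n_components': [64, 128],
    'rbm__learning_rate': [0.01, 0.05],
    'mlp__hidden_layer_sizes': [(64,), (128,)],
}, cv=3, n_jobs=-1)

search.fit(X_train, y_train)
print('best parameters:', search.best_params_)
print('out-of-sample accuracy:', search.score(X_test, y_test))
```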

Read more…

Multi-Layer Perceptron Study

Our next assignment is to use a Multi-Layer Perceptron to study a dataset.

The dataset I selected is the commonly studied Poker Hand data. Each record contains data for 5 playing cards and a poker hand classification, such as full house or straight.

This dataset proved to be difficult to work with. It is an example of an imbalanced dataset: the more common poker hands like one pair are heavily represented, and the less common hands like straight and flush are not.

I found that the Perceptron was able to correctly classify some poker hands very well while performing terribly for others. I suspect a very different training methodology is required to properly train a Perceptron with this dataset.
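
A quick way to see the problem is to look at per-class metrics instead of overall accuracy. Here is a sketch of that kind of check; it is not my actual experiment code, and 'poker-hand.csv' is a placeholder for a local copy of the UCI dataset:

```python
import pandas as pd
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# UCI Poker Hand data: 5 (suit, rank) pairs plus the hand class
columns = ['S1', 'C1', 'S2', 'C2', 'S3', 'C3', 'S4', 'C4', 'S5', 'C5', 'hand']
data = pd.read_csv('poker-hand.csv', names=columns)

X = data.drop(columns='hand')
y = data['hand']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=300, random_state=0)
mlp.fit(X_train, y_train)

# per-class precision and recall make the imbalance visible: frequent hands
# score well while rare hands like straights and flushes score poorly
print(classification_report(y_test, mlp.predict(X_test), zero_division=0))
```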

Read more…

Modified Pulse Sensing Algorithm

Our Physical Computing final project depends on a Pulse Sensor to detect a user's heartbeat. The people at World Famous Electronics created an Arduino library for their customers to use with their sensor. The library adds a lot of value because it provides users with a well-researched algorithm for properly detecting a heartbeat with the sensor. Pulse Sensor users don't have to reinvent the wheel and code their own algorithms. Writing your own algorithm to do this is difficult, and the one provided by the company is better than the one I came up with for our midterm.

Still, the provided algorithm isn't perfect. For some people it seems to miss some heartbeats and add extra ones. A fellow ITP student, Ellen, showed me that it would produce odd spikes in the beats-per-minute (BPM) value. It wasn't clear why this was happening. Since I had previously been analyzing the sensor's data in Python, I came up with a plan to figure out why the Arduino code was doing this and whether there was anything I could do about it. After studying the data and making some plots, I was able to make some improvements to the algorithm. It still isn't perfect, but my changes address many of its weaknesses.
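
To give a sense of what that analysis looks like, here is a simplified Python stand-in for the detection logic, not the Arduino library's algorithm or my modified version of it: detect a beat when the signal crosses a threshold on the way up, ignore crossings that arrive too soon after the last beat, and compute BPM from the interbeat intervals.

```python
import numpy as np

def detect_beats(signal, sample_rate_hz, threshold, refractory_s=0.3):
    """Return beat times (seconds) and BPM values from a raw pulse sensor trace."""
    beat_times = []
    last_beat = -np.inf
    for i in range(1, len(signal)):
        t = i / sample_rate_hz
        rising_cross = signal[i - 1] < threshold <= signal[i]
        # ignore threshold crossings that arrive sooner than a plausible heartbeat
        if rising_cross and (t - last_beat) >= refractory_s:
            beat_times.append(t)
            last_beat = t
    intervals = np.diff(beat_times)            # interbeat intervals in seconds
    bpm = 60.0 / intervals if len(intervals) else np.array([])
    return np.array(beat_times), bpm
```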

The original Pulse Sensor Arduino code is available online on GitHub. I am sharing this code with my fellow students who are also using the same sensor. After our projects are complete I will submit my modified code to GitHub as a pull request to share with the rest of the community.

Read more…