Drop Observation Camera

This is my 3rd post in a series on building a DIY Drop-on-Demand Inkjet Platform. In this post, I am building a camera and lighting rig for capturing moments in the drop formation and jetting sequence.

Animation composed from different drop jettings, captured with increasing strobe delay.


The rig needs to meet these requirements:

  • The nozzle orifice of a piezo inkjet may be 10s to 100s of microns across. While drop diameter will be larger than the orifice, to capture as much information as possible, the lower bound for our zoom should let us resolve features in the 10s of microns.
  • The full jetting sequence for a drop may take 10s of microseconds, so the rig needs to capture moments within that sequence, without significant motion blur.
  • Cost and accessibility: Components should be generally available and accessible to hobbyists. Costs should fit within our overall budget of $1000 for the whole platform.
  • Ideally, the camera and lighting setup should be small and simple enough to operate in the environment where the piezo inkjet is used (e.g. mounted on the gantry of a 3D printer or microfluidic dispenser).
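
A quick back-of-envelope check ties the size and timing requirements together. Drop velocities for piezo inkjets are typically a few meters per second (an assumed figure here, not a measurement), and 1 m/s is exactly 1 micron per microsecond, so strobe duration maps directly onto motion blur:

```cpp
#include <cassert>

// Motion blur (in microns) for a drop moving at velocity_m_s,
// lit by a strobe lasting strobe_us microseconds.
// 1 m/s == 1 um/us, so the conversion is a simple product.
double motion_blur_um(double velocity_m_s, double strobe_us) {
    return velocity_m_s * strobe_us;
}
```

At an assumed 2 m/s, a 20 microsecond strobe smears the drop across 40 microns, larger than the features we want to resolve, while a 2 microsecond strobe keeps blur to about 4 microns.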

General approach: Long exposure w/ fast strobe

The first thing we need to determine is how to capture an event that takes place within a few microseconds. While I’m sure a high-speed camera could tackle this task, it would break the budget. Instead, I’m opting for a high-speed flash. Imagine you want to capture the moment a balloon pops. A common technique is:

  • Set up a camera in a dark room.
  • Configure for a long exposure.
  • Connect the flash to a sound trigger.
  • Start exposing.
  • Pop the balloon.

The image sensor captures nothing while the room is dark. When the balloon pops and the flash triggers, the scene is lit for a brief moment, and just that moment is captured by the sensor.

Balloon pop captured with high speed flash.

This is the inspiration for the technique I’m going to use. I’ll need the following:

  • A webcam with configurable exposure (a lot of webcams have auto-exposure that can’t be overridden).
  • A zoom lens capable of capturing drop formations.
  • A light source to act as the flash (or strobe).
  • A trigger to manage the duration and delay of the strobe, relative to the waveform driving the piezo.
  • A “lightbox” to keep out external light and control how light from the strobe interacts with the subject.

Camera Selection

I chose the Korukesu C1 USB Camera, because I’ve used it for a number of computer vision projects. It has a CS mount to support different lenses. It is highly configurable in Linux using v4l2. It’s compact and quite inexpensive (around $100).

Lens #1: CS Mount Microscope Lens

I experimented with a few zoom lenses, but quickly discovered that I will need a microscope lens. I first tried this style of CS mount microscope lens:

A gigantic CS mount microscope lens.

This lens could capture images in the zoom range I want, but … I hated it: The lens itself is huge, about 12″ long (it dwarfs the tiny Korukesu C1). Worse, the minimum object distance was somewhere around 5 or 6 inches, so I had to position it far away from the piezo and lightbox. The zoom and focus controls were also fiddly and hard to dial in. With this unwieldy thing on my desk, any bump meant minutes of refocusing.

Lens #2: Hacking up a Cheap USB Children’s Microscope

While researching CS-mount microscope lenses, I kept bumping into these cheap USB microscopes.

A cheap USB microscope.

Some of these went as low as $15, or $20 with a slightly dubious claim of 1000X zoom on Amazon Prime. Dubious because the advertised magnification assumes displaying a 1024×768 image on a 21-inch monitor. But for $20, it was worth a shot.

The model I purchased is branded “Jiusion”, but the same model appears to be available with many different names. When I unboxed the microscope and tried it out, I was kind of impressed, except for a couple of issues:

1) Any small amount of stress on the base of the cable would cause dropped frames or connection loss. Super annoying. Maybe fixable if I add stress relief at the base.

2) The camera was only capable of capturing 1024×768 images.

3) Exposure was controlled automatically and could not be overridden (at least with device drivers I had access to). Possible deal breaker.

The lens in this microscope can’t be anything fancy, but when it works, the images are promising. What if I break it open, gut the lens, and make a mount for my Korukesu C1 camera? I found this handy YouTube video that explained how to disassemble the microscope without breaking the good parts.

The lens was mounted on a pair of guide rods, inside a threaded tube. When you rotate the tube, the lens fixture slides closer or farther from the image sensor, on the guide rods. This is how zoom is controlled on the microscope. Neat.

On the microscope, the guide rods are soldered to the main PCB (where the image sensor is) at the base end. I made a 3D-printable “cap” for the tip end to secure the rods instead. I made new rods from a clothes hanger and pounded them into place with a hammer.

Camera and lens with zoom-adjustable cap.

Now, when I rotate the cap, the lens fixture slides closer to or farther from the image sensor. This will be my “zoom” control.


Lightbox

Now I need a place to securely mount the piezo tube, the camera, and a light source. I also need to keep out any outside light, so that the camera’s image sensor isn’t getting any light until the moment the strobe fires. I designed and 3D printed this small lightbox.

The 3D-printed lightbox with inserted piezo jacket. (Not visible here, the lightbox has a hole in the bottom, so fluid jetted from the piezo is not trapped inside.)

The light source will be mounted to the back. The piezo tube is housed in a 3D-printed jacket (blue in this rendering) and inserted through the top. The camera has a view through the circular window in the front. I designed a focus ring that controls the distance from the lens to the piezo nozzle (the focal object), and attaches to the front of the lightbox. Here is how everything fits together:

Complete assembly with camera, lens, zoom and focus control, piezo inkjet device, and lighting.

I rotate the outer ring to control object distance (“focus”). I rotate the inner ring to control the distance from the lens to the sensor (“zoom”). It is a little awkward if I really want to control both of these, but I’ve found I keep the zoom fixed and only modify the focus.
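
The two rings are really the two variables of the thin-lens relation: the inner ring sets the image distance (lens to sensor), the outer ring sets the object distance (lens to nozzle). As a sketch of why they interact (the focal length below is a made-up illustrative value, not a measurement of this lens):

```cpp
#include <cassert>

// Thin-lens relation: 1/f = 1/d_obj + 1/d_img.
// Given the focal length and the image distance (lens-to-sensor,
// the "zoom" ring), return the object distance that is in focus
// (what the "focus" ring must be set to).
double object_distance(double f_mm, double d_img_mm) {
    return 1.0 / (1.0 / f_mm - 1.0 / d_img_mm);
}

// Magnification is the ratio of image distance to object distance.
double magnification(double f_mm, double d_img_mm) {
    return d_img_mm / object_distance(f_mm, d_img_mm);
}
```

With a hypothetical 15mm focal length and the lens 45mm from the sensor, the in-focus plane sits 22.5mm away at 2X magnification; sliding the lens toward the sensor lowers magnification and pushes the focal plane out, which is why changing zoom forces a refocus.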

Strobe: 50W constant current landscape spotlight LED

Image exposure is a function of the amount of light (we might call this “brightness”) the sensor is exposed to and the amount of time of the exposure. We’re trying to capture an event that takes place in a few microseconds, so we want to strobe our light source for only a few microseconds. We can make up for the short duration of the strobe by increasing the brightness of the light source.
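
The trade is linear: to hold exposure constant while cutting strobe duration, brightness must rise by the same factor. A tiny sketch of that reasoning:

```cpp
#include <cassert>

// Exposure ~ brightness * duration. To keep exposure fixed when the
// strobe shortens, brightness must scale by the inverse duration ratio.
double brightness_factor(double old_us, double new_us) {
    return old_us / new_us;
}
```

Dropping from a 20 microsecond strobe to 2 microseconds calls for a source roughly 10× brighter.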

I first tried a 12V LED array. I was able to get some results that might be acceptable, particularly if I post-process the image to improve contrast:

Drop captured with 12V LED array.
Contrast adjusted drop captured with 12V LED array.

But I had some consistency issues and experienced motion blur when I increased the strobe duration to 20+ microseconds.

I tried a few alternatives before landing on this solution: an incredibly bright LED array intended for use in landscape spotlights. It operates at ~30V and 1.5A and requires a constant current driver. Here is a sample image with this light source strobed for 2 microseconds:

Drop formation captured with 50W LED strobed for 2 microseconds.

I also had to diffuse the light. I printed a simple 0.6mm diffuser in white PLA.

High speed switching

Finally, I need to create a switch to strobe the LED. The LED operates at 30+V and 1.5A, and I want to control the strobe duration down to 1 microsecond. Since the supply voltage is significantly higher than logic voltage, the options I considered were relays and MOSFETs.

I experimented with mechanical and solid state DC relays and found that as I approached 1 microsecond strobe durations (attempting to enable supply voltage to the LED for 1 microsecond), the relays failed to switch.

Next, I tried an N Channel MOSFET, configured as a high side switch. The current flow from the Arduino’s GPIO wasn’t sufficient, so I made a PNP transistor circuit as a “pre-switch” so that the source for the MOSFET was the 5V supply VCC for the Arduino.
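
The reason a bare GPIO struggles here is gate charge: a MOSFET switches only as fast as its gate can be charged, roughly t ≈ Q_gate / I_drive. The numbers below are illustrative assumptions, not datasheet values for the specific MOSFET used:

```cpp
#include <cassert>

// Approximate MOSFET switching time: gate charge divided by the
// available gate drive current. With charge in nC and current in A,
// the result comes out in nanoseconds.
double switch_time_ns(double gate_charge_nC, double drive_current_A) {
    return gate_charge_nC / drive_current_A;
}
```

Assuming ~15 nC of gate charge, a GPIO sourcing 20 mA needs ~750 ns just to slew the gate, most of a 1 microsecond strobe; a transistor pre-switch driving the gate with a few hundred mA brings that down to tens of nanoseconds.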

Final assembly

Here’s how everything comes together:

Rendered assembly of parts that make up the camera and lighting rig.
Camera and lightbox rig in real world, ready for use.

Final parts list for lighting rig

  • Korukesu C1 camera
  • “Jiusion” 1000X microscope
  • 3D printed zoom and focus rings
  • A clothes hanger? Or other source of 2mm 3″ rods.
  • 3D printed lightbox
  • 3D printed diffuser
  • 50W LED array
  • Constant current driver
  • Switch circuit
    • N Channel MOSFET (NTE 2389)
    • BC547 NPN transistor
    • BC327 PNP transistor
    • 1 100Kohm resistor
    • 6 10Kohm resistors

The switch is driven by the Arduino Due used in Part 2: Arbitrary Waveform Generator. I’ll discuss the software in more detail in another post.

Waveform Generator

This is my second post in a series on building a DIY Drop-on-Demand Inkjet Platform. In my first post, I broke out the components required for the platform. In this post, I’m building the first: An arbitrary waveform generator. This component produces the signal that drives the piezo (good detailed explanation here). When we’re working with a new material, we’ll tune the wave shape, frequency, and amplitude to produce well-formed drops.

A single drop from a piezo inkjet. The shape of the drop is determined by the shape, frequency, and amplitude of a high voltage waveform that drives deformation of the ceramic piezo tube.

The waveform generator needs to have these characteristics:

  • Able to produce arbitrary waveforms. One of the purposes of this platform is to enable experimentation with different materials and inkjet devices. The ideal waveform may not be a simple sine or triangle wave.
  • Maximum signal frequency of around 100kHz.
  • Signal resolution of at least 8 samples per period at 100kHz.
  • Voltage range: TBD. We’ll amplify this signal to a high voltage later. For now, let’s suppose the signal from the waveform generator needs to run a range of 0-5V.
  • Must have precise timing control and be able to coordinate with other functions (eg. the trigger for the camera strobe that we’ll build in the next step).
  • Must be built with easily available hardware that fits within the overall budget for the platform (which should total less than $1000 US).
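
Taken together, the frequency and resolution requirements pin down the DAC update rate, which becomes the yardstick for the attempts below:

```cpp
#include <cassert>

// Required DAC update rate (samples/second) for a target signal
// frequency and a per-period resolution.
long required_sample_rate(long signal_hz, int samples_per_period) {
    return signal_hz * (long) samples_per_period;
}
```

8 samples per period at 100kHz means the DAC must accept 800,000 samples per second.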

The requirement for arbitrary waveforms and the need to coordinate with other precisely timed operations means I’ll want to start by generating a signal with a microcontroller and converting it to an analog signal with a DAC (digital-to-analog converter).

I’ll first discuss some naive approaches and why they don’t work.

An I2C DAC module: The MCP4725

The MCP4725 is available as a breadboard-friendly module from Adafruit. An I2C driver is available. It’s very easy to get started. Here’s some trivial Arduino code to produce a triangle wave with the MCP4725 as fast as we can:

#include <Wire.h>
#include "Adafruit_MCP4725.h"

Adafruit_MCP4725 dac;
#define  SAMPLES   9
#define  CEILING   4095
uint16_t wave[SAMPLES];

void setup(void) {
  dac.begin(0x62);            // Default I2C address for the Adafruit module.
  CreateTriangleWaveTable();
}

void loop(void) {
  for (int i = 0; i < SAMPLES; i++) {
    dac.setVoltage(wave[i], false);
  }
}

void CreateTriangleWaveTable() {
  for (int i = 0; i < SAMPLES; i++) {
    int16_t v = (((1.0 / (SAMPLES - 1)) * (SAMPLES - 1 - i*2)) * CEILING);
    if (i > round(SAMPLES/2)) v *= -1;
    wave[i] = v;
  }
}

Here is the result from the oscilloscope:

Frequency is only 300Hz? We’re limited by I2C speed. The default I2C clock is 100kbps. For each 12-bit sample, we pay some I2C overhead to transmit an address, a command, and 16 data bits (even though the device has 12-bit resolution). We can increase the I2C clock to Fast mode, 400kbps:


(Call Wire.setClock(400000) before Wire.beginTransmission() in Adafruit_MCP4725.cpp.)

Unsurprisingly, that improved our speed by approx. 4X. Not good enough. The I2C spec also defines a High-speed mode at 3.4Mbps, but it is not broadly supported. With an Arduino Due, I was only able to push the clock speed up to 2Mbps before it became unstable:

And still, just 4kHz. I2C isn’t going to cut it.
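
The scope readings track the protocol arithmetic. Each byte on the bus costs 9 clocks with its ACK, and an MCP4725 fast-mode write is an address byte plus two data bytes, plus a couple of clocks for start/stop (the bit accounting here is from the I2C protocol, not measured):

```cpp
#include <cassert>

// Maximum waveform frequency achievable over I2C: each DAC sample costs
// one address byte and two data bytes (9 clocks each with ACK), plus
// roughly 2 clocks of start/stop overhead.
double max_wave_hz(double bus_hz, int samples_per_period) {
    const double bits_per_sample = 3 * 9 + 2;        // 29 bits per update
    double samples_per_sec = bus_hz / bits_per_sample;
    return samples_per_sec / samples_per_period;
}
```

At 100kbps with 9 samples per period, this tops out near 383Hz, consistent with the ~300Hz scope reading once driver overhead is added; even an ideal 3.4Mbps bus would only reach about 13kHz.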

Arduino Due with onboard DAC

The Arduino Due has an onboard DAC. It’ll take a small change to the above code to use it:

#define SAMPLES 9
#define CEILING 4095
uint16_t wave[SAMPLES];

void setup(void) {
  analogWriteResolution(12);  // The Due DAC is 12-bit; the default write resolution is 8.
  CreateTriangleWaveTable();
}

void loop(void) {
  for (int i = 0; i < SAMPLES; i++) {
    analogWrite(DAC0, wave[i]);
  }
}

void CreateTriangleWaveTable() {
  for (int i = 0; i < SAMPLES; i++) {
    int16_t v = (((1.0 / (SAMPLES - 1)) * (SAMPLES - 1 - i*2)) * CEILING);
    if (i > round(SAMPLES/2)) v *= -1;
    wave[i] = v;
  }
}

30kHz. Getting closer. The voltage range is now ~400mV to 2.88V. The Due is a 3.3V device and the default analog reference voltage is 3.3V, but the Due’s DAC output range goes from 1/6 to 5/6 of the AREF value, rather than rail-to-rail. Interesting and possibly annoying. We’ll do something about this later.
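
The 1/6-to-5/6 rule predicts the range on the scope. A small helper mapping a 12-bit code to expected output volts on the Due (a sketch of the documented DAC behavior, not Arduino library code):

```cpp
#include <cassert>

// The SAM3X DAC output spans 1/6 to 5/6 of AREF rather than rail-to-rail.
// Map a 12-bit code (0..4095) to the expected output voltage.
double due_dac_volts(int code, double aref = 3.3) {
    double lo = aref / 6.0;           // 0.55 V at AREF = 3.3 V
    double hi = aref * 5.0 / 6.0;     // 2.75 V
    return lo + (hi - lo) * code / 4095.0;
}
```

Code 0 lands at 0.55V and 4095 at 2.75V, close to the ~0.4V to 2.88V measured on the scope.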

I still want to get up to 100kHz and also… I’m not satisfied with the resolution of these waveforms. But if I double the resolution (number of samples in a period), it will cut the frequency in half.

Arduino Due with onboard DAC and Direct Memory Access

In my naive examples so far, I’ve just output values as fast as I can in the main loop. If I want to control other capabilities with this microcontroller, it will interfere with the waveform output. Also, the operations required to look up a value in the wave table and perform analogWrite are too slow. We want a solution that uses timer interrupts to free up the main loop and enables faster (perhaps buffered) access to the wave table to improve speed.

The Arduino Due is built on the Atmel SAM3X8E Cortex M3, which has an onboard Direct Memory Access (DMA) controller. We’ll use this to buffer our waveform samples onto a hardware register so we can read them very quickly when needed. We’ll also set up the DAC to be triggered by a timer interrupt.

#define  SAMPLES   9
#define  CEILING   4095
uint16_t wave[SAMPLES];

// Incantations for DAC set-up for analogue wave using DMA and timer interrupt.
// http://asf.atmel.com/docs/latest/sam3a/html/group__sam__drivers__dacc__group.html
void setupDAC() {
  pmc_enable_periph_clk(DACC_INTERFACE_ID);     // Start clocking DAC.
  dacc_set_transfer_mode(DACC, 0);
  dacc_set_power_save(DACC, 0, 1);              // sleep = 0, fast wakeup = 1
  dacc_set_analog_control(DACC, DACC_ACR_IBCTLCH0(0x02) | DACC_ACR_IBCTLCH1(0x02) | DACC_ACR_IBCTLDACCORE(0x01));
  dacc_set_trigger(DACC, 1);                    // Trigger conversions from the TC0 output.
  dacc_set_channel_selection(DACC, 0);
  dacc_enable_channel(DACC, 0);
  dacc_enable_interrupt(DACC, DACC_IER_ENDTX);
  NVIC_EnableIRQ(DACC_IRQn);                    // Needed so DACC_Handler below actually runs.
  DACC->DACC_PTCR = 0x00000100;                 // Enable the PDC transmit channel.
}

void DACC_Handler(void) {
  DACC->DACC_TNPR = (uint32_t) wave;        // Point the next DMA transfer at the wave table.
  DACC->DACC_TNCR = SAMPLES;                // Number of counts until Handler re-triggered.
}

// System timer clock set-up for DAC wave.
void setupTC(float freq_hz) {
  int steps = (420000000UL / freq_hz) / (10 * SAMPLES);
  TcChannel * t = &(TC0->TC_CHANNEL)[0];
  t->TC_CCR = TC_CCR_CLKDIS;                // Disable TC clock.
  t->TC_SR;                                 // Clear status register.
  t->TC_CMR =                               // Capture mode.
              TC_CMR_TCCLKS_TIMER_CLOCK1 |  // Set the timer clock to TCLK1 (MCK/2 = 84MHz/2 = 42MHz).
              TC_CMR_WAVE |                 // Waveform mode.
              TC_CMR_WAVSEL_UP_RC;          // Count up with automatic trigger on RC compare.
  t->TC_RC = steps;                         // Frequency.
  t->TC_RA = steps / 2;                     // Duty cycle (btwn 1 and RC).
  t->TC_CMR = (t->TC_CMR & 0xFFF0FFFF) |
              TC_CMR_ACPA_CLEAR |           // Clear TIOA on counter match with RA0.
              TC_CMR_ACPC_SET;              // Set TIOA on counter match with RC0.
  t->TC_CCR = TC_CCR_CLKEN | TC_CCR_SWTRG;  // Enables the clock if CLKDIS is not 1.
}

void setup() {
  float freq_hz = 200000; // Target: 200kHz
  CreateTriangleWaveTable();
  setupTC(freq_hz);
  setupDAC();
}

void loop() {}

void CreateTriangleWaveTable() {
  for (int i = 0; i < SAMPLES; i++) {
    int16_t v = (((1.0 / (SAMPLES - 1)) * (SAMPLES - 1 - i*2)) * CEILING);
    if (i > round(SAMPLES/2)) v *= -1;
    wave[i] = v;
  }
}

Note that I’ve configured to target 200kHz. Here’s the view from the scope:

Actual frequency is 180kHz (That appears to be the max for any target). So this works. Now I want to address the voltage range issue. I’m adding a pair of LMV358 op amps configured for 10X gain. I’ll reduce the CEILING in the above code, so the amplitude of our wave from the DAC is 0.5V.

Also, for the image above, I reduced the target frequency to 50kHz and increased the resolution to 31 samples for a smoother wave.
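
The gap between target and actual frequency has two sources: the timer count truncates to an integer, and the DACC itself has a maximum conversion rate. The integer part can be reproduced from setupTC (42MHz is MCK/2 on the Due):

```cpp
#include <cassert>

const int SAMPLES = 9;

// Mirror of the steps calculation in setupTC: timer ticks per DAC sample
// at a 42 MHz timer clock.
long timer_steps(double freq_hz) {
    return (long)((420000000UL / freq_hz) / (10 * SAMPLES));
}

// The waveform frequency that actually results after integer truncation.
double actual_freq_hz(double target_hz) {
    return 42000000.0 / (timer_steps(target_hz) * SAMPLES);
}
```

A 200kHz target truncates to 23 ticks per sample, which the timer side would render as ~203kHz, so the measured 180kHz ceiling is likely the DAC conversion rate rather than the timer.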

Looking good. Finally, let’s prove that we can produce arbitrary waveforms. Here is a house with a chimney at 50kHz:

That’s it. The code above is the basis for an Arduino Due-powered arbitrary waveform generator that meets the stated requirements. Next up, a camera and lighting rig to capture drop formation in microsecond time.

DIY Drop-on-Demand Inkjet Platform

My goal with this project is to build a hobby-grade inkjet platform, from accessible parts, for <$1000, that will let me (and anyone else) experiment with a piezo tube dispenser and different fluids. These types of lab setups are commercially available and commonly used in academia and industry, but prohibitively expensive for most hobbyists.

I previously experimented with piezo inkjets here. I moved away from the Xaar 128 because I found it’s really built to work with a narrow range of solvent inks and not easily repurposed for other materials. For example, the waveform that drives the piezo is generated on-device. The voltage, frequency, and shape of the waveform can’t be altered, so a fluid with different viscosity or surface tension may fail to jet.

Microfluidics is a broad field with applications in manufacturing, printing, and biology. My aim is for this platform to assist people in building hardware and experimenting with fluids for “drop-on-demand” applications that require precisely depositing extremely small volumes (pico- or nano-liters) of fluid.

Example drop formations demonstrate consistency of the setup: 3 consecutive sample drops of 91% IPA, produced with a 20kHz triangle wave, captured with a 2 microsecond strobe, 250 microseconds after the wave pulse.

A lab setup has these parts:

  • The piezo inkjet fluid dispenser.
  • A waveform generator. Piezo devices work by deforming when exposed to an electric field. The shape and frequency of the waveform dictate how the piezo deforms and how much force is generated.
  • A high voltage amplifier. These types of piezo cylinders are effective in a voltage range of 30V-100V.
  • A fluid reservoir and handling to deliver fluid to the dispenser.
  • For analysis of new materials, a camera and strobe setup capable of capturing events within a few microseconds.

I plan to source or build each of these parts for a reproducible lab setup for <$1000 and share plans on github. I’m not done yet and I don’t know how far I will get. I’ll share progress on each part in future posts.

Brain Puzzle

A few weeks back, I saw a great video from Maker’s Muse on YouTube, describing how to make custom 3×3 puzzles (e.g. a Rubik’s Cube) out of any three-dimensional model. I’ve always been curious about how these puzzles work, so what better way to learn than to make my own?

Final puzzle. 3D printed and painted.

Yup. It’s a brain.

From brain scan to brain surgery

I started with this brain model from Thingiverse. These puzzles work best when there is good symmetry across the X/Y/Z axes, so I scaled the model along each axis independently to improve its overall symmetry.

Scaled brain with deep folds and crevices.

I had to create a solid “core” for the model, because there is a large gap between lobes and several deep folds that would have made it impossible to divide the model into contiguous puzzle pieces. To create this core, I used MeshLab to create a “bubble shell”.

The bubble shell core.
Original model with bubble shell core.

Next, I took the union of the scaled model and core, then took the boolean difference with the 3×3 puzzle template from Maker’s Muse.

Model after boolean difference with the Maker’s Muse 3×3 template applied. Each puzzle piece has chamfered edges to improve movement.

Models generated from 3d scans are often really messy and if they aren’t repaired, mesh operations fail. When I have problems with bad meshes, I often turn to Netfabb to repair the meshes and move on. It turned out that Netfabb’s “standard” repair operations weren’t good enough for this model. I had to use the full version for “extended” repair. I also used Netfabb for all of the boolean operations.

Final model with painted folds. This model would be very hard to solve if it were monochrome.

When the modelling was finished, I printed all of the parts on a Prusa i3 Mk3 (about 24 hours of print time for one puzzle). I then sanded and primed the unassembled parts. I assembled the puzzle with springs and M3 screws. Finally, I painted the interior of the folds on each “side” with acrylic paint.

Final puzzle. Printed and painted.
Interior view of model. Pieces are centers, edges, or corners. Centers connect to a 6-side core (not pictured) with screws and springs.


  • It was great to get hands-on to really see how these puzzles work. It’s a very clever design. I really recommend this Maker’s Muse video to see more details on the mechanics.
  • The movement on this puzzle is good (not great). It particularly helps to apply silicone lubricant periodically.
  • This puzzle is a little harder than a 3×3 cube. Since there isn’t symmetry in the center pieces, their orientation matters.
Well, this is embarrassing. Somehow, this is the only photo I have of the puzzle unsolved.

Meural Remote

I mentioned in my Digital Gallery Wall post that it would be easy to build a remote control for Meural Canvases. Here it is:

Meural Remote: Case printed on Prusa i3 Mk3. Button cover printed on Formlabs Form 2 with flexible resin. (Case is unfinished and should be sanded, primed, and painted).

This was super easy because each Meural Canvas is wifi-connected and has a tiny webserver. The commands are exposed through a REST interface. So if you know the local IP address of your Meural device, you can execute these commands from your web browser:

ON: /remote/control_command/resume

OFF: /remote/control_command/suspend

LEFT: /remote/control_command/set_key/left/

RIGHT: /remote/control_command/set_key/right/


The remote is based on an ESP8266. These are versatile microcontrollers with onboard wifi. For this project, I knew I wanted battery power and that I wanted to recharge the battery via USB, so I wanted a board with a charge controller. I opted for this one from DFRobot (see below for an alternative suggestion).


There are a lot of options for programming the ESP8266. For this project, I chose NodeMCU, a Lua-based firmware. I’ve used NodeMCU for a few projects. I have mixed feelings about Lua, but I really like having an interpreter when I’m debugging a new hardware project.

There’s great documentation for NodeMCU, so I won’t get into it in detail. But you will need to flash a custom NodeMCU build with the HTTP module. (I recommend letting NodeMCU Custom Builds create your build. Keep all of the default modules and add HTTP).


The circuit is very simple. I built this on a prototype board designed to fit the ESP8266 board from DFRobot. There are 4 momentary switches (for each command: on, left, right, off). For each of these, one leg is connected to a GPIO pin. The other is connected to ground (the ESP8266 has built-in pull-ups). I also added a status LED to indicate when buttons are pressed and to blink when we’re waiting for WIFI connection.


See my repo on Github.

Thoughts and learnings:

  • I didn’t give any consideration to power management for this project. The remote is always connected to wifi, draining >100mA. With an 800mAh LIPO battery, I’ve got less than 8 hours of charge. At the cost of some latency, the ESP8266 could be put to sleep and wake up / reconnect to wifi on button press.
  • NodeMCU is not multi-threaded. When I want to send a command to all 6 Meural devices, I have to connect to each in sequence and wait for an OK after issuing a command. It takes about half a second for each device, so the sequence is very visible.
  • Alternative hardware: One thing I don’t like about the DFRobot board is that the charge controller delivers 500mA and I can’t change it. For safety, this means the connected battery should be 500mAh or higher. The battery increased the size of my design quite a bit. Adafruit’s Feather Huzzah ESP8266 has a 100mA LIPO charger and may be a good alternative.

The Chocovibe CV100

We make bean-to-bar chocolate in our kitchen and I’m often trying out different ideas to improve our process. The Chocovibe CV100 is a vibration table for molding tempered chocolate.

Tempered chocolate has distinctive shine and appealing texture. A temper is achieved by heating and cooling the chocolate to precise points where certain crystal structures form and can be maintained.

When it’s ready to mold, dark chocolate is just barely warm enough to flow.

To level chocolate and ensure it fills a mold evenly, we often lift and drop the molds several times. It’s tedious, messy, and doesn’t always work as the chocolate cools.

The Chocovibe CV100 is an experimental vibration table cobbled together from scrap plywood, a silicone mat, springs, screws, nuts, a vibration motor, and an ESP8266 microcontroller (yes … it has wifi).

It quickly levels the chocolate. The vibration also helps nibs or other toppings sink into the bars. We’ve used it a couple of times so far and it’s a real help to our process. I may find myself building a more kitchen-friendly version of this in the future.

Parametric Coral Tubes, Concrete Wall Sculpture

Final. Framed and mounted above fireplace.
Closeup. This piece looks most interesting from sharp angles.

Build process: Modeling.

Modeled 3 different tube shapes using Grasshopper in Rhino3D.
This will be a wall hanging, so this is the head-on rendering.
An angle view.
Another angle view.

Build process: Silicone molds.

Positives printed on Formlabs SLA printer.
For each part, I 3d printed a casting sleeve and interior structural support.
Hot glue was used to seal the sleeve seam and attach the sleeve to a smooth surface — hot glue seals well and is easy to remove. The sleeve and model positive were coated lightly with Vaseline.
Since the models have some deep undercuts, I cast with a high grade silicone resin (Tap Plastics Platinum Silicone Resin). This is a soft and tear-resistant resin.

Build process: Concrete.

I reused the sleeves and insert to improve stability when casting concrete.
I used Buddy Rhodes counter top grade concrete mix with a generous amount of Owens Corning reinforcement fibers. I typically mixed concrete for 6 molds at one time, making the mixture wet enough to flow. After fully mixed, I moved the mixture to a large plastic Ziploc bag, cut one corner, and piped into molds.

Build process: Mounting and framing.

After pouring about 60 parts, I stained a 24″ x 24″ plywood board. I marked the layout grid with chalk, and attached each part with epoxy.
I made a roughly 5′ x 5′ frame out of 5.5″ walnut boards, using dowel joinery.
You can never have enough clamps.
Photo of the back of the joined main piece and frame. They are attached with 20 dowels (the photo is from a test fit, so only a couple of dowels are inserted).

Digital Gallery Wall

Final digital gallery wall.
Cycling images.
Cycling images again.

My wife is a serious amateur photographer. A few years ago, we created a photo wall in her office to showcase her framed images. We always intended to swap out the images with new photos over time, but 4 years later, the same images were in these frames…

We thought about creating a digital photo wall that’s easy to update and can potentially show many more images. I bought a Meural Canvas digital frame a few months back to try it out and compare it to other options. The Meural Canvas is a 27″ 1080p LCD display wrapped in an attractive wooden frame and matte. There is a film applied to the LCD panel that improves the display. In daylight conditions, it doesn’t look like an LCD display and most people would be fooled into thinking it’s an ordinary framed image.

Meural devices have an onboard controller that connects to WIFI, so there would be no need to connect to an external display controller. They are ready-to-mount, so would require minimal hardware or wall preparation.

The Meural Canvas looked good, so we decided to make a Gallery Wall with 6 Meural Canvases.


The biggest challenge was getting power to the devices. The Meural Canvas ships with a cloth power cord and large DC transformer. I didn’t want to dangle 6 cords to the floor and have a pile of transformers.

Bulky Meural transformers and cords.

Options I considered:

  • Tear apart the 100-year-old plaster wall to route low voltage power behind the wall.
  • Carve cable-routing channels into large sheets of 1/2″ or 3/4″ MDF, mount them to the wall, and paint to blend in with the wall.
2-Conductor 16 AWG Ghost Wire on roll.

Then I found another option: Ghost Wire is flat low-voltage wire that adheres to the surface of your wall and can be finished to blend in seamlessly. They offer a 2-channel 16-gauge product that’s about 2″ wide and a little thicker than masking tape. Will it work?

A Meural Canvas runs on 12V. I measured the current consumed by a single Meural Canvas. Typical was ~450mA. Peak was 1600mA (at maximum brightness). The 16 AWG Ghost Wire product is rated up to 10A. My maximum run length is less than 7ft. If I run three devices per channel (typical 1.35A, peak 4.8A), we’ll have a maximum voltage drop of about 1% and typically 0.33%. This should work.
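
To reproduce the drop numbers (assuming the standard wire-table resistance of ~4.0 milliohms per foot for 16 AWG copper and the 7 ft run length; both are my assumptions about the calculation, not vendor specs for Ghost Wire):

```cpp
#include <cassert>

// Voltage drop as a percentage of supply, for a run of 16 AWG copper.
double drop_percent(double current_A, double run_ft, double supply_V = 12.0) {
    const double ohms_per_ft = 0.004;   // ~4.0 milliohms/ft for 16 AWG
    return 100.0 * current_A * ohms_per_ft * run_ft / supply_V;
}
```

Three devices on a 7 ft channel: the 1.35A typical load gives roughly a 0.32% drop, and the 4.8A worst case roughly 1.1%, both negligible at 12V.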

I opted for 2 parallel channels with 3 frames each, powered by a single 200W switching DC transformer.


Next, I mounted the devices to the wall with the provided cleats from Meural and I had two problems:

  • The displays weren’t uniformly flush to the wall. The mounting cleats seemed to hug the wall tighter on one side than the other. This meant that a frame might hug the wall on the left and float out an inch on the right. I don’t think I’d notice if I were mounting a single canvas, but it was obvious and unattractive when I mounted several frames side-by-side.
  • I wanted some extra space behind each frame for Ghost Wire connectors and additional wiring.

I solved these issues by mounting the cleat to a 3/4″ plywood standoff. I made the standoffs 20″ wide and attached with 5 drywall anchors each. The additional width and rigidity made it easy to level and keep flush. One of the screws in each cleat is in a wall stud.

Hiding the GhostWire seams with drywall mud. We followed by sanding and painting.
View of mounting cleats and stand-offs, plus wiring for each display after cleaning up GhostWire seams and painting. All of this will be hidden behind the frames.

What I like about the Meural Canvas for a multi-display gallery wall:

  • Attractive frame and matte. Looks like a frame rather than an electronic device. Ready to mount.
  • Very nice display, clearly tuned for this application. Makes photos look better and more natural than an off-the-shelf 4K display.
  • Reasonably good mobile and web apps. We only intend to display our own images. It’s easy to upload and manage image collections.
  • Each device connects to WIFI. Setup is easy. Each device even has a small web server with REST interface for commands, so it will be easy to make a remote or add voice-assist features for Google Home or Alexa (a practical consideration when you have 6 displays).

What I didn’t like:

  • The device itself takes 12V DC power. It comes with a very large power supply. Meural made an attempt at an attractive cloth cord, but it still looks like a cord and casts a shadow. For future models, I hope they offer a flat-cord option and perhaps a more compact power supply.
  • Auto-brightness and standby features don’t poll frequently enough (maybe hourly?) and work differently for different displays. For example, one of six displays may go into standby because it thinks the room is dark at 4PM. What gives?
  • 16:9. Every digital display I looked at had a 16:9 aspect ratio. Photos are typically 4:3 or 3:2. Obviously, the manufacturers are using standard LCD panels, but it’s annoying to have to crop (or let the Meural autocrop) all of our images.
  • I don’t love the mounting cleat. When mounted in landscape orientation, the Meural is 29.5″ wide and the cleat is ~3″. It feels flimsy and isn’t wide enough to level the frame properly. I think an appropriate cleat that could support portrait and landscape orientations would be 12″ wide.

A Bluetooth Mouse (for Cats)

Idea: My cat has a “bristle-bot” style toy, but it’s not a favorite. He’ll watch it as it wanders randomly around the floor, but he doesn’t really engage with a toy unless it “hides” — goes behind other objects so he can strategize about where it’s going to show up next.

The style of motion in these robots is kind of neat. There are no wheels. Instead, they operate with vibration.  There’s something insect-like about the movement.

Can I make a bristle-bot toy that I can control with my phone so my cat and I can have fun together? Yes… Well, I made a toy. I didn’t succeed in engaging my cat.

A Bluetooth Mouse for Cats in lab-mouse white.

I experimented with a couple of different designs. The parts list for this version includes:

  • Bluetooth controller (I used Redbear Labs’ BLE Nano)
  • Two 6 mm, 3 V disc-style vibration motors for motion.
  • Two LEDs for eyeballs and status indication (disconnected: flashing; connected: solid).
  • A pair of transistors for switching current to the motors.
  • A small LiPo battery (150 mAh).
  • A power switch.

The design also includes a 3d-printed mouse body and a custom PCB to keep everything compact.

Interior view.

The operating principle: the vibration motors are mounted to the sides of the body, so engaging a motor causes the legs on that side to flex. If both motors operate at roughly the same frequency, engaging both simultaneously moves the mouse forward, while favoring one motor steers it.
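The two-motor drive amounts to a crude differential mixer. This hypothetical helper (not from the original firmware; the sign convention for `turn` is an assumption, since which way a single motor steers depends on how it's mounted) maps forward and turn commands to two PWM duty cycles:

```python
def motor_duties(forward: float, turn: float) -> tuple[float, float]:
    """Mix forward (0..1) and turn (-1..1) commands into (left, right)
    PWM duty cycles for the two vibration motors.

    Convention (an assumption, not from the original build): positive
    turn drives the right motor harder, flexing the right-side legs more.
    """
    clamp = lambda x: max(0.0, min(1.0, x))
    return clamp(forward - turn), clamp(forward + turn)

motor_duties(1.0, 0.0)   # both motors full on -> roughly straight ahead
motor_duties(0.5, 0.5)   # right motor only -> hard turn
```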

Bottom view. I left the bottom open for convenience while I was iterating on the design. I've also found that minimizing the amount of structure increases the amount of vibration transferred to the legs.

I designed the parts in Rhino 3d with Grasshopper and printed them on a Formlabs Form 2, using the standard Grey resin.

Render of the 3d model, using transparency to illustrate the two distinct parts: platform (with legs) and shell.
Lights indicate bluetooth device is connected.
Sammy is somewhat interested in the mouse.


Some observations from testing:

  • While the 2 distinct vibration motors offer some control over the direction of the mouse, it’s not particularly precise. It’s hard to steer around objects.
  • In practice, every vibration motor I tried seemed to be somewhat unbalanced (presumably the pair operated at slightly different frequencies), so motion was biased to one side.
  • Battery wiring initially took up a lot of space. I had to trim the leads from the LiPo battery and re-crimp the JST connector. This was a pain to learn how to do. The Engineer PA-09 Micro Connector Crimpers turned out to be the right tool.

Hopper for Cocoa Nibs / Liquor Extraction

Chocolate making is a messy business. One of the steps in nib-to-bar chocolate making is to extract the liquor from the nibs. We use a Champion Juicer.

Inevitably, when you add nibs to the chute … a lot of them fly back up the chute and land all over the kitchen.

So I built this hopper and plunger system to help. It has three parts:

  • A collar that fits on top of the chute and lets nibs be added perpendicular to the chute.
  • A hopper that holds about a cup of nibs that are gravity-fed into the collar and chute.
  • An extended plunger to push nibs down the chute.

To operate: lower the plunger to cover the collar opening, add nibs to the hopper, lift the plunger to open the collar and gravity-feed some nibs into the chute, then lower the plunger again when the chute is part-way full. No nibs should escape.

Rendering of distinct parts.

I printed the parts on a Formlabs Form 2 SLA printer. Formlabs doesn't make a food-safe resin (though they do make dental-grade resins). In fact, I'm not aware of any food-grade resins or filaments for 3d printing. This is a topic the Internets have a lot of opinions about. I chose to coat the parts in many layers of polyurethane, which is food-safe when fully cured.