I started with this brain model from Thingiverse. These puzzles work best when there is good symmetry across X/Y/Z axes, so I scaled the model asymmetrically to improve its overall symmetry.
I had to create a solid “core” for the model, because there is a large gap between the lobes and several deep folds that would have made it impossible to divide the model into contiguous puzzle pieces. To create this core, I used MeshLab to create a “bubble shell”.
Next, I took the union of the scaled model and core, then took the boolean difference with the 3×3 puzzle template from Maker’s Muse.
Models generated from 3d scans are often really messy, and if they aren’t repaired, mesh operations fail. When I have problems with bad meshes, I often turn to Netfabb to repair them and move on. It turned out that Netfabb’s “standard” repair operations weren’t good enough for this model; I had to use the full version for “extended” repair. I also used Netfabb for all of the boolean operations.
When the modelling was finished, I printed all of the parts on a Prusa i3 Mk3 (about 24 hours of print time for one puzzle). I then sanded and primed the unassembled parts. I assembled the puzzle with springs and M3 screws. Finally, I painted the interior of the folds on each “side” with acrylic paint.
It was great to get hands-on to really see how these puzzles work. It’s a very clever design. I really recommend this Maker’s Muse video to see more details on the mechanics.
The movement on this puzzle is good (not great). It particularly helps to apply silicone lubricant periodically.
This puzzle is a little harder than a 3×3 cube. Since there isn’t symmetry in the center pieces, their orientation matters.
I mentioned in my Digital Gallery Wall post that it would be easy to build a remote control for Meural Canvases. Here it is:
This was super easy because each Meural Canvas is wifi-connected and has a tiny webserver. The commands are exposed through a REST interface. So if you know the local IP address of your Meural device, you can execute these commands from your web browser:
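Those commands are just HTTP GETs against each device’s local web server. As a minimal sketch in Python (the endpoint path below is a hypothetical placeholder, not Meural’s documented API — substitute the actual paths exposed by your device):

```python
# Build command URLs for a Meural Canvas's local REST interface.
# NOTE: the endpoint path is a hypothetical placeholder for illustration.

def meural_command_url(ip, command):
    """Return the URL that triggers `command` on the device at `ip`."""
    valid = {"on", "off", "left", "right"}
    if command not in valid:
        raise ValueError(f"unknown command: {command}")
    return f"http://{ip}/remote/control_command/{command}"

# Issuing the request is then a one-liner, e.g. with urllib:
#   from urllib.request import urlopen
#   urlopen(meural_command_url("192.168.1.50", "left"), timeout=2)
```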
The remote is based on an ESP8266. These are versatile microcontrollers with onboard WiFi. For this project, I knew I wanted battery power and that I wanted to recharge the battery via USB, so I wanted a board with a charge controller. I opted for this one from DFRobot (see below for an alternative suggestion).
There are a lot of options for programming the ESP8266. For this project, I chose NodeMCU, a Lua-based firmware. I’ve used NodeMCU for a few projects. I have mixed feelings about Lua, but I really like having an interpreter when I’m debugging a new hardware project.
There’s great documentation for NodeMCU, so I won’t get into it in detail. But you will need to flash a custom NodeMCU build with the HTTP module. (I recommend letting NodeMCU Custom Builds create your build. Keep all of the default modules and add HTTP).
The circuit is very simple. I built it on a prototype board designed to fit the ESP8266 board from DFRobot. There are 4 momentary switches, one for each command (on, left, right, off). For each switch, one leg is connected to a GPIO pin and the other to ground (the ESP8266 has built-in pull-ups). I also added a status LED to indicate when buttons are pressed and to blink while waiting for a WiFi connection.
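The actual firmware is NodeMCU Lua, but the button-to-command logic is simple enough to sketch in Python (the pin numbers here are illustrative, not the ones in my build):

```python
# Sketch of the button handling. With internal pull-ups, an idle pin
# reads high (1) and a pressed button pulls it low (0).
# Pin assignments are examples only.
BUTTON_PINS = {1: "on", 2: "left", 3: "right", 4: "off"}

def pressed_command(read_pin):
    """Return the command for the first pressed button, or None.
    `read_pin` is a function mapping a pin number to its logic level."""
    for pin, command in BUTTON_PINS.items():
        if read_pin(pin) == 0:  # pulled low = pressed
            return command
    return None
```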
I didn’t give any consideration to power management for this project. The remote is always connected to WiFi, drawing >100mA. With an 800mAh LIPO battery, I get less than 8 hours of charge. At the cost of some latency, the ESP8266 could be put to sleep and made to wake up and reconnect to WiFi on a button press.
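The back-of-the-envelope battery math:

```python
# Rough battery-life estimate for the always-on design.
battery_mah = 800   # LIPO capacity
draw_ma = 100       # continuous draw with WiFi connected (a floor, not a ceiling)

hours = battery_mah / draw_ma
print(hours)  # 8.0 -- so "less than 8 hours" once real-world overhead is included
```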
NodeMCU is not multi-threaded. When I want to send a command to all 6 Meural devices, I have to connect to each in sequence and wait for an OK after issuing a command. It takes about half a second for each device, so the sequence is very visible.
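In other words, the latency of a “broadcast” grows linearly with the number of devices:

```python
# Sequential command dispatch: total latency is per-device time x device count.
def broadcast_seconds(n_devices, seconds_per_device=0.5):
    return n_devices * seconds_per_device

print(broadcast_seconds(6))  # 3.0 seconds to sweep all six frames
```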
Alternative hardware: One thing I don’t like about the DFRobot board is that the charge controller delivers 500mA and I can’t change it. For safety, this means the connected battery should be 500mAh or higher. The battery increased the size of my design quite a bit. Adafruit’s Feather Huzzah ESP8266 has a 100mA LIPO charger and may be a good alternative.
When it’s ready to mold, dark chocolate is just barely warm enough to flow.
To level chocolate and ensure it fills a mold evenly, we often lift and drop the molds several times. It’s tedious, messy, and doesn’t always work as the chocolate cools.
The Chocovibe CV100 is an experimental vibration table cobbled together from scrap plywood, a silicone mat, springs, screws, nuts, a vibration motor, and an ESP8266 microcontroller (yes … it has wifi).
It quickly levels the chocolate. The vibration also helps nibs or other toppings sink into the bars. We’ve used it a couple of times so far and it’s a real help to our process. I may find myself building a more kitchen-friendly version of this in the future.
My wife is a serious amateur photographer. A few years ago, we created a photo wall in her office to showcase her framed images. We always intended to swap out the images with new photos over time, but 4 years later, the same images were in these frames…
We thought about creating a digital photo wall that’s easy to update and can potentially show many more images. I bought a Meural Canvas digital frame a few months back to try it out and compare it to other options. The Meural Canvas is a 27″ 1080p LCD display wrapped in an attractive wooden frame and mat. There is a film applied to the LCD panel that improves the display. In daylight conditions, it doesn’t look like an LCD display, and most people would be fooled into thinking it’s an ordinary framed image.
Meural devices have an onboard controller that connects to WiFi, so there is no need for an external display controller. They are ready to mount, so they require minimal hardware or wall preparation.
The Meural Canvas looked good, so we decided to make a Gallery Wall with 6 Meural Canvases.
The biggest challenge was getting power to the devices. The Meural Canvas ships with a cloth power cord and large DC transformer. I didn’t want to dangle 6 cords to the floor and have a pile of transformers.
Options I considered:
Tear apart the 100-year-old plaster wall to route low voltage power behind the wall.
Carve cable-routing channels into large sheets of 1/2″ or 3/4″ MDF, mount them to the wall, and paint them to blend in with the wall.
Then I found another option: Ghost Wire is flat low-voltage wire that adheres to the surface of your wall and can be finished to blend in seamlessly. They offer a 2-channel 16-gauge product that’s about 2″ wide and a little thicker than masking tape. Will it work?
A Meural Canvas runs on 12V. I measured the current consumed by a single Meural Canvas. Typical was ~450mA. Peak was 1600mA (at maximum brightness). The 16 AWG Ghost Wire product is rated up to 10A. My maximum run length is less than 7ft. If I run three devices per channel (typical 1.35A, peak 4.8A), we’ll have a maximum voltage drop of about 1% and typically 0.33%. This should work.
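For anyone checking my math, here’s the voltage-drop estimate (assuming roughly 4 mΩ per foot for 16 AWG copper and treating the 7 ft run as a single conductor, which matches the percentages above):

```python
# Voltage-drop check for the Ghost Wire runs.
R_PER_FT = 0.004   # ohms per foot, approximate for 16 AWG copper
run_ft = 7         # longest run
supply_v = 12.0    # Meural Canvas supply voltage

def drop_pct(current_a):
    """Voltage drop as a percentage of the 12V supply."""
    return 100 * current_a * R_PER_FT * run_ft / supply_v

print(round(drop_pct(4.8), 2))   # peak: three frames at 1.6A each
print(round(drop_pct(1.35), 2))  # typical: three frames at 0.45A each
```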
I opted for 2 parallel channels with 3 frames each, powered by a single 200W switching DC transformer.
Next, I mounted the devices to the wall with the cleats provided by Meural, and ran into two problems:
The displays weren’t uniformly flush to the wall. The mounting cleats seemed to hug the wall tighter on one side than the other. This meant that a frame may hug the wall on the left and float out an inch on the right. I don’t think I’d notice if I were mounting a single canvas, but it was obvious and unattractive when I mounted several frames side-by-side.
I wanted some extra space behind each frame for Ghost Wire connectors and additional wiring.
I solved these issues by mounting the cleat to a 3/4″ plywood standoff. I made the standoffs 20″ wide and attached with 5 drywall anchors each. The additional width and rigidity made it easy to level and keep flush. One of the screws in each cleat is in a wall stud.
What I like about the Meural Canvas for a multi-display gallery wall:
Attractive frame and mat. Looks like a frame rather than an electronic device. Ready to mount.
Very nice display, clearly tuned for this application. Makes photos look better and more natural than an off-the-shelf 4K display.
Reasonably good mobile and web apps. We only intend to display our own images. It’s easy to upload and manage image collections.
Each device connects to WiFi. Setup is easy. Each device even has a small web server with a REST interface for commands, so it will be easy to make a remote or add voice-assist features for Google Home or Alexa (a practical consideration when you have 6 displays).
What I didn’t like:
The device itself takes 12V DC power. It comes with a very large transformer. Meural made an attempt to make an attractive cloth cord, but it still looks like a cord and casts a shadow. For future models, I hope they offer a flat cord option and perhaps a more compact DC transformer.
Auto-brightness and standby features don’t poll frequently enough (maybe hourly?) and work differently for different displays. For example, one of six displays may go into standby because it thinks the room is dark at 4PM. What gives?
16:9. Every digital display I looked at had a 16:9 aspect ratio. Photos are typically 4:3 or 3:2. Obviously, the manufacturers are using standard LCD panels, but it’s annoying to have to crop (or let the Meural autocrop) all of our images.
I don’t love the mounting cleat. When mounted in landscape orientation, the Meural is 29.5″ wide and the cleat is ~3″. It feels flimsy and isn’t wide enough to level the frame properly. I think an appropriate cleat that could support portrait and landscape orientations would be 12″ wide.
Idea: My cat has a “bristle-bot” style toy, but it’s not a favorite. He’ll watch it as it wanders randomly around the floor, but he doesn’t really engage with a toy unless it “hides” — goes behind other objects so he can strategize about where it’s going to show up next.
The style of motion in these robots is kind of neat. There are no wheels. Instead, they operate with vibration. There’s something insect-like about the movement.
Can I make a bristle-bot toy that I can control with my phone so my cat and I can have fun together? Yes… Well, I made a toy. I didn’t succeed in engaging my cat.
A Bluetooth Mouse for Cats in Lab-mouse White.

I experimented with a couple of different designs. The parts list for this version includes:
A Bluetooth controller (I used Redbear Labs’ BLE Nano).
Two 6mm 3V disc-style vibration motors for motion.
Two LEDs for eyeballs and status indication (disconnected: flashing; connected: solid).
A pair of transistors for switching current to the motors.
A small LIPO battery (150mAh).
A power switch.
The design also includes a 3d-printed mouse body and a custom PCB to keep everything compact.
The operating principle: the vibration motors are mounted to the sides of the body, so when a motor is engaged, the vibration causes the legs on that side of the body to flex and scoot forward. If both motors operate at roughly the same frequency, engaging both simultaneously moves the mouse forward.
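A simplified way to state the steering model (the real dynamics are messier, as the later notes on unbalanced motors show):

```python
# Idealized motion for the two-motor bristle-bot: each motor nudges the body
# forward on its own side, so the pair steers differentially.
def motion(left_on, right_on):
    if left_on and right_on:
        return "forward"
    if left_on:
        return "veer right"  # only the left side scoots forward
    if right_on:
        return "veer left"
    return "stopped"
```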
I designed parts in Rhino 3d with Grasshopper. I 3d printed parts with a Form Labs Form 2, using the standard Grey resin.
While the 2 distinct vibration motors offer some control over the direction of the mouse, it’s not particularly precise. It’s hard to steer around objects.
In practice, any vibration motors I tried seemed to be somewhat unbalanced (presumably operating at different frequencies), so motion is biased to one side.
Battery wiring initially took up a lot of space. I had to trim the leads from the LIPO battery and recrimp the JST connector. This was a pain to learn how to do. The Engineer PA-09 Micro Connector Crimpers turned out to be the right tool.
Inevitably, when you add nibs to the chute … a lot of them fly back up the chute and land all over the kitchen.
So I built this hopper and plunger system to help. It has three parts:
A collar that fits on top of the chute and enables nibs to be added perpendicular to the chute.
A hopper that holds about a cup of nibs that are gravity-fed into the collar and chute.
An extended plunger to push nibs down the chute.
To operate: Lower the plunger to cover the collar opening, add nibs, lift the plunger to open the collar opening and gravity-feed some nibs, lower the plunger when the chute is part-way full. No nibs should escape.
I printed the parts on a Form Labs Form 2 SLA printer. Form Labs doesn’t make a food-safe resin (though they do make dental-grade resins). In fact, I’m not aware of any food-grade resins or filaments for 3d-printing. This is a topic the Internets have a lot of opinions about. I chose to coat the parts in many layers of polyurethane, which is food-safe when fully cured.
I use a Handibot for CNC woodwork. If you aren’t familiar, the Handibot is a portable CNC router that you place on top of your workpiece to make pre-programmed cuts. It can cut the same designs and perform most of the same tasks as a full-size CNC machine, but it’s compact: small enough to pick up and move around. Since the machine sits on top of the workpiece, the size of project it can tackle is virtually unlimited. The downside of the compact design is that it only cuts a 6″x8″ area at a time before the operator must physically lift and reposition the machine for the next “tile” of a cut. So for a large project on a 4’x8’ sheet, you may reposition and register the Handibot 96 times.
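The 96-tile figure falls straight out of the geometry:

```python
import math

# How many 6" x 8" tiles cover a 4 ft x 8 ft sheet (no overlap assumed)?
sheet_w_in, sheet_h_in = 48, 96  # 4 ft x 8 ft in inches
tile_w_in, tile_h_in = 6, 8      # Handibot cut area per placement

tiles = math.ceil(sheet_w_in / tile_w_in) * math.ceil(sheet_h_in / tile_h_in)
print(tiles)  # 96
```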
Today, when I’m performing multi-part cuts, I use a custom rigid jig to register the position for each cut. These jigs aren’t bad, but they require building a new tool, possibly with access to different machinery, need additional workspace, and physically limit the size of my projects.
Idea: Optical Registration
I’m going to try an experiment. Can I improve my experience with the Handibot by performing multi-part cuts using computer vision for registration? In short, can I use computer vision to capture a full view of my workpiece (i.e., “scan” the workpiece), then identify the target and current position of the machine on the work surface, for each cut, with high precision (+/- 0.01”)? And will precise computer vision-based registration make for a better user experience?
Full disclosure: I chose this project as an opportunity to learn about computer vision and get experience with OpenCV. My goal was not to compare and find an ideal registration method, so I did not consider alternatives. That said, I don’t believe my precision requirements could be met with alternatives including LIDAR, light-based proximity sensors (e.g., the popular Sharp proximity sensors), or ultrasonic sensors. If you disagree, and can point me to a good alternative to a camera-based solution, please let me know.
You can use and modify Handicam yourself, and I’ll provide more details on GitHub. For this post, I’m jumping straight to findings.
Good news: We can create an optical registration solution for the Handibot with better than 1/100” precision. Doing so requires some custom parts, including a camera mount and Aruco marker board. It also requires a good camera with control over focus and exposure and reasonably good lighting. The code on GitHub is available as a proof of concept.
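To see why camera quality matters, consider the raw resolution budget (the resolution and field of view here are illustrative, not measurements from my setup):

```python
# Back-of-the-envelope: inches per pixel for a camera watching the work area.
image_px = 1920   # horizontal resolution (illustrative)
fov_in = 12.0     # field of view across the workpiece (illustrative)

in_per_px = fov_in / image_px
print(in_per_px)  # 0.00625
```

That’s already finer than 1/100” per whole pixel, and Aruco corner detection refines to subpixel accuracy, but any blur from poor focus, exposure, or lighting quickly eats that margin. Hence the need for a good camera with manual control.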
Now the bad news: Using Handicam requires manually placing the machine for each tile. Accurately maneuvering the Handibot, which weighs about 50lbs and has an anti-skid rubber bottom, is difficult and cumbersome. So, for example, while Handicam can provide feedback that the Handibot needs to be moved 1/32” on the X-axis, lifting and moving the machine that distance by hand, within a tolerance of 1/100”, takes a lot of frustrating trial and error. In my experience, I could do it, but it took minutes to adjust the position for each tile.
With access to Handicam, I still prefer a rigid jig. A jig gives me reasonable precision and significantly less effort per tile. That said, there are some future work areas that could greatly improve the solution and make it viable:
On demand single-tile CAM, based on the Handibot’s current position: Rather than pre-program a grid of “tiles” for a complete multi-part cut then force the user to accurately position the machine for each tile, generate the cut instructions for any position that the user places the machine, on demand. This way, the user can roughly move the machine to where a cut is needed, without fussing about accurate placement, but still get a precisely aligned cut according to the machine’s measured position on the workpiece.
Autonomous movement for the Handibot: Turn the Handibot into a mobile robot, using wheels, tracks, or belts for accurate repositioning. Since I started work on Handicam, I’ve learned about some new options for CNC routing with autonomous motion. Considering these new options, I think that the Handibot and its tile-based approach for large cuts may still offer some advantages: The weight and anti-skid features that make for tedious manual repositioning between cuts are actually a virtue at cut time, because they ensure the machine stays precisely aligned in spite of large forces on the router that could move the whole machine. Further, the tile-at-a-time approach enables the user to choose to add additional work-holding when needed.
I don’t currently have any concrete plans to pursue these ideas, but if I tinker more, I’ll try to share. Thanks!
Here’s a simple vibration table for making concrete tiles, made with a Black and Decker Mouse, some screws, nuts, washers, and compression springs, and scrap wood.
The springs I used are 1 3/4″ long, 1/2″ OD, .054 wire gauge, but any reasonable gauge should work. Tighten the screws to create a little compression. Use a pair of nuts on each screw to keep them from backing off under vibration.
I mix concrete to a “wet sand” texture and trowel into molds. The table helps create an even distribution with no voids and levels out the open surface of the mold.
Mecanum wheels are cool. Each wheel is composed of a series of rollers, pitched at 45 or -45 degrees. When moving forwards and backwards, the rollers do not engage, but when the front and back wheels rotate in opposite directions, the rollers engage to move the vehicle left or right.
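The standard mecanum mixing equations make this concrete (sign conventions vary with roller orientation; this is one common convention, not code from this build):

```python
# Mecanum drive mixing: combine forward speed (vy), strafe speed (vx), and
# rotation (w) into four wheel speeds. Positive = driving the vehicle forward.
def mecanum_mix(vx, vy, w):
    front_left  = vy + vx + w
    front_right = vy - vx - w
    rear_left   = vy - vx + w
    rear_right  = vy + vx - w
    return front_left, front_right, rear_left, rear_right

# Pure strafe right (vx=1): the front and rear wheels on each side
# counter-rotate, which is exactly what engages the rollers.
print(mecanum_mix(1, 0, 0))  # (1, -1, -1, 1)
```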
These wheels are 3d printed, except for machine screws and bearings.
The vehicle in the video was as basic as I could make it. The electronics are an ESP8266 for control and two dual H-bridge motor controllers. The ESP8266 has 9 usable I/O pins, just enough to control 4 motors if they share a PWM pin.
The 2nd ESP8266 you might see in the video is running ESPLink, acting as a WiFi serial port for remote code updates.
Some learnings from building:
This was my 2nd time using Ninjaflex and Semiflex. It’s a challenge to print, because it’s so soft that it easily flexes inside the filament drive of an FDM printer, instead of being forced down the thermal barrier tube. It requires some babysitting to catch issues. If I plan to use this a lot, I’ll look into modifying my extruder.
For the rigid parts, I used Form Labs Gray Resin, which is designed for prototyping. It has some nice properties: it cures easily and sands well. It’s also a bit brittle, and since the bearings were “force fit” into place with pliers, I managed to crack some of the bearing holders during assembly. I fixed some of these by applying a little resin to the crack and curing it under light.
I *may* have had trouble with some bearing holes due to bad tolerances from my curing process. To date, I’ve just washed my parts by dipping and sloshing in Yellow Magic and IPA, then curing in open air under a light intended for curing acrylic nails. Maybe it’s time to try an ultrasonic bath and underwater curing.
Some learnings about Mecanum wheels:
Motion in the video may look smooth, but if you look closely, you can see some bouncing. When driving, you can definitely hear “clacking” as the rollers come in contact with the surface. This suggests that each wheel is often losing contact with the surface, so the traction isn’t great. A larger number of rollers would help.
Mecanum wheels are known to operate poorly when the weight distribution of the vehicle isn’t uniform, or under too much weight. I experimented with adding up to 15lbs of weight to the vehicle and it definitely had problems. Particularly, side-to-side motion stalled or became erratic.