Build process: Modeling.
Build process: Silicone molds.
Build process: Concrete.
Build process: Mounting and framing.
My wife is a serious amateur photographer. A few years ago, we created a photo wall in her office to showcase her framed images. We always intended to swap out the images with new photos over time, but 4 years later, the same images were in these frames…
We thought about creating a digital photo wall that’s easy to update and can potentially show many more images. I bought a Meural Canvas digital frame a few months back to try it out and compare it to other options. The Meural Canvas is a 27″ 1080p LCD display wrapped in an attractive wooden frame and mat. A film applied to the LCD panel improves the display’s appearance. In daylight conditions, it doesn’t look like an LCD display, and most people would be fooled into thinking it’s an ordinary framed image.
Meural devices have an onboard controller that connects to Wi-Fi, so there’s no need to connect them to an external display controller. They are ready to mount, so they require minimal hardware or wall preparation.
The Meural Canvas looked good, so we decided to make a Gallery Wall with 6 Meural Canvases.
The biggest challenge was getting power to the devices. The Meural Canvas ships with a cloth power cord and large DC transformer. I didn’t want to dangle 6 cords to the floor and have a pile of transformers.
Options I considered:
Then I found another option: Ghost Wire is flat low-voltage wire that adheres to the surface of your wall and can be finished to blend in seamlessly. They offer a 2-channel 16-gauge product that’s about 2″ wide and a little thicker than masking tape. Will it work?
A Meural Canvas runs on 12 V. I measured the current consumed by a single Meural Canvas: typical draw was ~450 mA; peak was 1600 mA (at maximum brightness). The 16 AWG Ghost Wire product is rated up to 10 A, and my maximum run length is less than 7 ft. If I run three devices per channel (typical 1.35 A, peak 4.8 A), the maximum voltage drop is about 1%, and typically about 0.33%. This should work.
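As a sanity check, those numbers can be reproduced with a quick script. This is a back-of-envelope sketch assuming ~4.02 mΩ/ft for 16 AWG copper and treating the 7 ft run as the effective conductor length, which matches the figures quoted above:

```python
# Voltage-drop check for the Ghost Wire run.
# Assumptions: 16 AWG copper at ~4.02 milliohms per foot, and a
# 7 ft effective conductor length.

R_PER_FT = 0.00402   # ohms per foot, 16 AWG copper (approximate)
RUN_FT = 7           # longest run to a frame, in feet
V_SUPPLY = 12.0      # Meural Canvas supply voltage

def drop_percent(amps, run_ft=RUN_FT):
    """Voltage drop across the run as a percentage of the 12 V supply."""
    return 100.0 * amps * R_PER_FT * run_ft / V_SUPPLY

typical = drop_percent(3 * 0.45)   # three frames at ~450 mA each
peak = drop_percent(3 * 1.6)       # three frames at 1.6 A peak each

print(f"typical drop: {typical:.2f}%   peak drop: {peak:.2f}%")
```

Running this gives roughly 0.32% typical and 1.13% at peak, in line with the estimates above.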
I opted for 2 parallel channels with 3 frames each, powered by a single 200 W switching DC power supply.
Next, I mounted the devices to the wall using the cleats Meural provides, and ran into two problems:
I solved these issues by mounting the cleat to a 3/4″ plywood standoff. I made the standoffs 20″ wide and attached with 5 drywall anchors each. The additional width and rigidity made it easy to level and keep flush. One of the screws in each cleat is in a wall stud.
What I like about the Meural Canvas for a multi-display gallery wall:
What I didn’t like:
Idea: My cat has a “bristle-bot” style toy, but it’s not a favorite. He’ll watch it as it wanders randomly around the floor, but he doesn’t really engage with a toy unless it “hides” — goes behind other objects so he can strategize about where it’s going to show up next.
The style of motion in these robots is kind of neat. There are no wheels. Instead, they operate with vibration. There’s something insect-like about the movement.
Can I make a bristle-bot toy that I can control with my phone so my cat and I can have fun together? Yes… Well, I made a toy. I didn’t succeed in engaging my cat.
I experimented with a couple of different designs. The parts list for this version includes:
The design also includes a 3d-printed mouse body and a custom PCB to keep everything compact.
The operating principle is that since vibration motors are mounted to the sides of the body, when a motor is engaged, the vibration will cause the legs on one side of the body to flex. If both motors operate at roughly the same frequency, engaging both motors simultaneously will move the mouse forward.
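That steering idea can be sketched as a simple mixing function: equal drive on both motors moves the mouse forward, while an imbalance flexes one side’s legs more and turns it. This is illustrative only, not the actual firmware; the function name and duty-cycle range are assumptions:

```python
# Sketch of differential-vibration steering: each motor flexes the legs
# on its side of the body, so a duty-cycle imbalance turns the mouse.
# Hypothetical names and ranges; not the real control code.

def clamp(x):
    """Keep a duty cycle in the 0..1 range."""
    return max(0.0, min(1.0, x))

def motor_duties(throttle, turn):
    """Map throttle (0..1) and turn (-1 full left .. +1 full right)
    to (left_duty, right_duty) for the two vibration motors."""
    left = clamp(throttle * (1.0 + turn))
    right = clamp(throttle * (1.0 - turn))
    return left, right

# Full throttle, no turn: both motors at full duty, mouse goes forward.
print(motor_duties(1.0, 0.0))
```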
I designed the parts in Rhino 3D with Grasshopper and 3d printed them on a Formlabs Form 2, using the standard Grey resin.
Chocolate making is messy business. One of the steps in nib-to-bar chocolate making is to extract the liquor from the nibs. We use a Champion Juicer.
Inevitably, when you add nibs to the chute … a lot of them fly back up the chute and land all over the kitchen.
So I built this hopper and plunger system to help. It has three parts:
To operate: Lower the plunger to cover the collar opening, add nibs, lift the plunger to open the collar opening and gravity-feed some nibs, lower the plunger when the chute is part-way full. No nibs should escape.
I printed the parts on a Formlabs Form 2 SLA printer. Formlabs doesn’t make a food-safe resin (though they do make dental-grade resins). In fact, I’m not aware of any food-grade resins or filaments for 3d-printing. This is a topic the Internets have a lot of opinions about. I chose to coat the parts in many layers of polyurethane, which is food-safe when fully cured.
I use a Handibot for CNC woodwork. If you aren’t familiar, the Handibot is a portable CNC router that you place on top of your workpiece to make pre-programmed cuts. It can cut the same designs and perform most of the same tasks as a full-sized CNC machine, but it’s compact enough to pick up and move around. Since the machine sits on top of the workpiece, the size of project it can tackle is virtually unlimited. The downside of the compact design is that it only cuts a 6″×8″ area at a time before the operator must physically lift and reposition the machine for the next “tile” of a cut. So for a large project on a 4′×8′ sheet, you may reposition and register the Handibot 96 times.
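The 96-tile figure falls straight out of dividing the sheet by the working area; a quick sketch:

```python
import math

# How the "96 tiles" figure for a 4'x8' sheet follows from the
# Handibot's 6" x 8" working area: tile counts round up in each axis.
def tiles(sheet_w_in, sheet_h_in, cut_w_in=6, cut_h_in=8):
    return math.ceil(sheet_w_in / cut_w_in) * math.ceil(sheet_h_in / cut_h_in)

print(tiles(48, 96))  # 4 ft x 8 ft sheet -> 8 columns x 12 rows = 96
```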
Today, when I’m performing multi-part cuts, I use a custom rigid jig to register the position for each cut. These jigs aren’t bad, but they require building a new tool, possibly with access to different machinery, need additional workspace, and physically limit the size of my projects.
Idea: Optical Registration
I’m going to try an experiment. Can I improve my experience with the Handibot by performing multi-part cuts using computer vision for registration? In short, can I use computer vision to capture a full view of my workpiece (i.e., “scan” the workpiece), then identify the target and current position of the machine on the work surface, for each cut, with high precision (+/- 0.01″)? And will precise computer vision-based registration make for a better user experience?
Full disclosure: I chose this project as an opportunity to learn about computer vision and get experience with OpenCV. My goal was not to compare and find an ideal registration method, so I did not consider alternatives. That said, I don’t believe my precision requirements could be met with alternatives such as LIDAR, light-based proximity sensors (e.g., the popular Sharp proximity sensors), or ultrasonic sensors. If you disagree, and can point me to a good alternative to a camera-based solution, please let me know.
You can use and modify Handicam yourself, and I’ll provide more details on GitHub. For this post, I’m jumping straight to findings.
Good news: We can create an optical registration solution for the Handibot with better than 1/100” precision. Doing so requires some custom parts, including a camera mount and Aruco marker board. It also requires a good camera with control over focus and exposure and reasonably good lighting. The code on GitHub is available as a proof of concept.
Now the bad news: Using Handicam requires manually placing the machine for each tile. Accurately maneuvering the Handibot, which weighs about 50 lbs and has an anti-skid rubber bottom, is difficult and cumbersome. So, for example, while Handicam can provide feedback that the Handibot needs to be moved 1/32″ on the X-axis, lifting and moving the machine that distance by hand, within a tolerance of 1/100″, takes a lot of frustrating trial and error. In my experience, I could do it, but it took minutes to adjust the position for each tile.
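To give a feel for how feedback like “move 1/32″ on X” comes about, here’s a minimal sketch of converting a detected pixel offset into a physical correction, using a marker of known size as the scale reference. The names and numbers are illustrative, not the actual Handicam code (which is on GitHub):

```python
# Sketch of camera-based registration math: an ArUco marker of known
# physical size establishes a pixels-per-inch scale, which converts a
# pixel offset between the target and detected machine positions into
# a physical move. Illustrative only.

def correction_inches(pixel_offset, marker_px, marker_inches):
    """Convert a pixel offset into a physical move, using a marker
    of known size as the scale reference."""
    px_per_inch = marker_px / marker_inches
    return pixel_offset / px_per_inch

# e.g. a 2" marker imaged at 400 px wide gives 200 px/inch,
# so a 6.25 px offset corresponds to a 1/32" correction.
move = correction_inches(6.25, 400, 2.0)
print(move)
```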
With access to Handicam, I still prefer a rigid jig. A jig gives me reasonable precision and significantly less effort per tile. That said, there are some future work areas that could greatly improve the solution and make it viable:
On demand single-tile CAM, based on the Handibot’s current position: Rather than pre-programming a grid of “tiles” for a complete multi-part cut and then forcing the user to accurately position the machine for each tile, generate the cut instructions on demand for wherever the user places the machine. The user can then roughly move the machine to where a cut is needed, without fussing over accurate placement, and still get a precisely aligned cut based on the machine’s measured position on the workpiece.
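As a rough sketch of what that could look like: assuming the machine’s measured pose on the workpiece is (x, y, rotation), the workpiece-space toolpath can be re-expressed in machine coordinates with a rigid transform. Names here are hypothetical; this isn’t existing Handicam code:

```python
import math

# Sketch of on-demand single-tile CAM: measure where the machine was
# actually set down (position mx, my and rotation theta on the
# workpiece), then map workpiece-space toolpath points into the
# machine's own coordinate frame. Illustrative only.

def to_machine_coords(points, mx, my, theta):
    """Map workpiece-space (x, y) points into the frame of a machine
    sitting at (mx, my), rotated by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y in points:
        dx, dy = x - mx, y - my          # translate to machine origin
        out.append((c * dx + s * dy,     # rotate into machine frame
                    -s * dx + c * dy))
    return out

# A point 1" to the machine's right, with the machine unrotated:
print(to_machine_coords([(5.0, 3.0)], 4.0, 3.0, 0.0))
```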
Autonomous movement for the Handibot: Turn the Handibot into a mobile robot, using wheels, tracks, or belts for accurate repositioning. Since I started work on Handicam, I’ve learned about some new options for CNC routing with autonomous motion. Considering these new options, I think that the Handibot and its tile-based approach for large cuts may still offer some advantages: The weight and anti-skid features that make for tedious manual repositioning between cuts are actually a virtue at cut time, because they ensure the machine stays precisely aligned in spite of large forces on the router that could move the whole machine. Further, the tile-at-a-time approach enables the user to choose to add additional work-holding when needed.
I don’t currently have any concrete plans to pursue these ideas, but if I tinker more, I’ll try to share. Thanks!
Here’s a simple vibration table for making concrete tiles, made from a Black and Decker Mouse sander, some screws, nuts, washers, compression springs, and scrap wood.
The springs I used are 1 3/4″ long, 1/2″ OD, .054WG, but any reasonable gauge should work. Tighten the screws to create a little compression, and use a pair of nuts on each screw to keep them from backing off under vibration.
I mix concrete to a “wet sand” texture and trowel into molds. The table helps create an even distribution with no voids and levels out the open surface of the mold.
Mecanum wheels are cool. Each wheel is composed of a series of rollers, pitched at 45 or -45 degrees. When moving forwards and backwards, the rollers do not engage, but when the front and back wheels rotate in opposite directions, the rollers engage to move the vehicle left or right.
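That behavior falls out of the standard mecanum inverse kinematics. Here’s a minimal sketch, assuming the usual X-pattern roller layout and arbitrary speed units (this is the textbook formulation, not my robot’s actual firmware):

```python
# Standard mecanum inverse kinematics for an X-pattern roller layout:
# map body velocities to the four wheel speeds.
# vx = forward, vy = strafe right, wz = clockwise spin.

def mecanum_wheel_speeds(vx, vy, wz):
    """Return (front_left, front_right, rear_left, rear_right) speeds."""
    return (vx + vy + wz,   # front left
            vx - vy - wz,   # front right
            vx - vy + wz,   # rear left
            vx + vy - wz)   # rear right

# Strafing right: front and rear wheels on each side spin in
# opposite directions, so the rollers carry the vehicle sideways.
print(mecanum_wheel_speeds(0, 1, 0))
```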
These wheels are 3d printed, except for machine screws and bearings.
The vehicle in the video was as basic as I could make it. Electronics are an ESP8266 for control and 2 dual H bridge motor controllers. The ESP8266 has 9 usable I/O pins, just enough to control 4 motors if they share a PWM pin.
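A hypothetical pin budget shows why nine GPIOs are just enough: each H-bridge channel needs two direction pins, plus one PWM pin for speed shared across all four channels. The pin numbers below are purely illustrative, not the actual wiring:

```python
# Illustrative pin budget for 4 motors on 9 GPIOs:
# two direction pins per H-bridge channel, one shared PWM pin.
# Pin numbers are placeholders, not real ESP8266 GPIO assignments.

PIN_MAP = {
    "front_left":  (1, 2),
    "front_right": (3, 4),
    "rear_left":   (5, 6),
    "rear_right":  (7, 8),
}
SHARED_PWM_PIN = 9

pins_used = sum(len(p) for p in PIN_MAP.values()) + 1
print(pins_used)  # 4 motors x 2 direction pins + 1 shared PWM = 9
```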
The 2nd ESP8266 you might see in the video is running esp-link, acting as a Wi-Fi serial bridge for remote code updates.
Some learnings from building:
Some learnings about Mecanum wheels:
I wanted to build a storage bench for this area in my attic workspace. There were a couple of challenges: This house is more than a hundred years old, and nothing is square. The window sill is 1/2″ higher on one end than the other. There’s a vent in the wall on the left side. The windows are casement-style and open to the inside.
To deal with the vent issues, I opted to work ventilation into the front of the bench. These are removable panels, so the design can change in the future. The window lid is sized so that it can open whether the windows are opened or closed. The left section can be closed off if I later decide to isolate the vent.
Medium: Walnut. I purchased this nice free-standing wardrobe and wanted to place it in dead space in the entry hallway. Unfortunately, this placed it right on top of the cold air return vent for our furnace. I made a skirt to elevate it and match the height of the baseboards, then fabricated this vent screen, based on a photo my wife took of an architectural element in Jaipur.
The Xaar 128 is a piezoelectric inkjet printhead used in large format vinyl sign-making. It *might* be useful in 3d printing, conductive ink, or masking applications.
Why piezo? TLDR: Most inkjet printheads are “thermal”: they work by superheating a fraction of the ink in a chamber, turning it into gas that expands to force the remainder of the ink out of a nozzle. Superheating limits the range of materials these printheads can use. Piezoelectric printheads are less common. Because they force fluid out of the nozzle mechanically, they don’t have to change the state of the fluid to operate, and they can work with a broader range of materials.