Using THREE.JS to render to SVG

I came across a blog post that demonstrates using THREE.js to create SVG images. Since that demo was written in CoffeeScript, it took me a while to understand it and build an equivalent JavaScript demo (with source code).

The SVGRenderer is undocumented on the THREE.js website, and it requires a few extra files that are not part of the standard THREE.js distribution. This post will help you pull the necessary parts together.

My demo is loosely based on this great THREE.js tutorial. I modified it to show the WebGL output on the left-hand side and the SVG capture on the right-hand side. Clicking the arrow updates the SVG capture and shows the code for the SVG at the bottom of the page.

When you hover your mouse cursor over the right-hand side, the paths of the SVG will highlight in red. These correspond to triangles in the original THREE.js model.

[Image: three-to-svg demo]

The nice thing about rendering THREE.js models as SVG is that the visible faces will become path elements in the DOM, allowing you to highlight them with a single style sheet rule:

path:hover {
   stroke:       red;
   stroke-width: 2px;
}

This rule tells the web browser to stroke all path elements in a solid red line whenever the user hovers the mouse cursor over them.

How it works:

The demo uses the undocumented SVGRenderer object from THREE.js. The SVGRenderer object depends on another object called Projector. Neither is part of the official THREE.js build, so I grabbed the two source files from the “examples/js/renderers” directory of the THREE.js distribution on GitHub and placed them in my “lib” directory.

When the user clicks the arrow, the SVG on the right side is updated using a new instance of the SVGRenderer object. Here is what the code looks like:

var svgRenderer = new THREE.SVGRenderer();
svgRenderer.setClearColor(0xffffff);
svgRenderer.setSize(width, height);
svgRenderer.setQuality('high');
svgRenderer.render(scene, camera);

The SVGRenderer stores the SVG data in its domElement attribute. In the following code fragment, I insert it into a parent DIV and remove the width and height attributes from the svg element so that I can scale it with style sheet rules.

svgContainer.appendChild(svgRenderer.domElement);
svgRenderer.domElement.removeAttribute('width');
svgRenderer.domElement.removeAttribute('height');
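
With those attributes removed, the SVG can be scaled with a rule like the following (a sketch; “svg-container” is a hypothetical id for the parent DIV):

#svg-container svg {
   width:  100%;
   height: auto;
}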

The SVG source for the bottom panel comes from the svgContainer.innerHTML attribute (domElement.outerHTML would work too). I use a regular expression to break the source into lines and then copy it into the destination text field:

document.getElementById('source').value = svgContainer.innerHTML.replace(/<path/g, "\n<path");
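
As a standalone illustration of what that replacement does (using a hypothetical minimal SVG string, not the demo’s actual output), it inserts a line break before every path element so each face lands on its own line:

```javascript
// Hypothetical minimal SVG markup, standing in for svgContainer.innerHTML
var svgSource = '<svg viewBox="0 0 10 10"><path d="M0 0L1 1"/><path d="M1 1L2 2"/></svg>';

// Insert a newline before each <path>; the /g flag applies the
// replacement to every match, not just the first one
var formatted = svgSource.replace(/<path/g, "\n<path");

console.log(formatted);
// Every <path ...> now starts on a new line, which makes the generated
// source much easier to read in the text field.
```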

Could textures be used in 3D Printing?

Real world objects have surface texture. In computer graphics, real world materials are simulated by combining bitmapped textures with facet information. Bump maps or displacement maps add fine details to a surface without increasing the polygon count.

[Image: cube]

3D printing software today does not use textures at all: a model of a brick wall, for example, requires many very small polygons to capture its details.

One reason is that 3D slicers use STL files which cannot carry texture information. There are several other formats that do, however, so using another file format such as OBJ would solve this problem. A second reason may be that 3D printing happens layer by layer, making it difficult to perform a displacement for all surface orientations.

For vertical surfaces, it would be easy: the slicer would use texture values to displace the print head laterally, perpendicular to the wall contours. For horizontal surfaces, however, the displacement would be above or below the current layer, which would complicate things.
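
To make the vertical-wall case concrete, here is a minimal sketch (my own illustration, not code from any actual slicer) of displacing a contour point along its outward normal by a sampled texture value:

```javascript
// Sketch: displace a 2D contour point outward by a texture-derived amount.
// point and normal are {x, y} objects; normal is assumed to be unit length
// and to point away from the wall. textureValue is in [0, 1]; maxDepth is
// the maximum displacement in millimeters.
function displacePoint(point, normal, textureValue, maxDepth) {
  var offset = textureValue * maxDepth;
  return {
    x: point.x + normal.x * offset,
    y: point.y + normal.y * offset
  };
}

// Example: a point on a wall facing +x, with a mid-gray texture sample
// and a maximum displacement of 2 mm
var displaced = displacePoint({x: 10, y: 5}, {x: 1, y: 0}, 0.5, 2);
// displaced is {x: 11, y: 5}: the print head moves 1 mm outward
```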

In future posts, I’ll continue to explore the possibility of using textures in 3D printing and will present some work I am doing towards that end.