Welcome to Dublin

So, in an effort to keep myself sane over the next week and a bit, before the Masters course actually commences, I think I’m going to start blogging again.

My focus has been rotating between playing in Rhino & Grasshopper - revisiting the approach I took to the undergrad archive project, specifically the grid generation method; learning German (A1) through the Goethe Institute (something I have admittedly been struggling with); reading Cloud Atlas; binge-watching The West Wing; and staving off boredom during this quarantine period by re-learning how to play chess.

Why revisit the UG project? There were a couple of things left to try out with respect to the creation of the form from a computational standpoint. It also felt like a good way to gently get my head back into using Grasshopper, and thinking in Grasshopper logic, as I haven’t really had the chance to play with it during my year out in industry.

The purpose of this task was to re-envision a way to quickly generate three-dimensional scaffold grids at multiple different scales, and then use custom “moulds” in boolean operations to carve shapes, whether voids or solids, out of these scaffolds. The next goal would be to create a new system based upon a replicating unit, whether designed through Grasshopper or generated manually, and then multiply this unit such that it is bounded by those voids or shapes.

Looking back to the 2nd year project, I remember an issue that came up: I had changed the scale of the scaffold from (I think) 400×400mm centres to 600×600mm centres, but by that point it would have taken me a week or more to rebuild the model I was using for visualisations, a week I did not have. This investigation started as a way to side-step issues such as this. It is admittedly presumptive to assume I’ll be taking this any further for MArch, but, much like German, it felt like a job left unfinished.

1st Iteration

For the first iteration, I set out three separate planes, each constrained by drawn boundary curves. The number of points was provided by sliders, and the origin was centred at 0,0,0. I designed this system so that the grid would automatically centre itself on the origin, rather than start at the origin and spread out. From these points, rectangles were generated such that they sat on the correct plane (rather than all with respect to the XY plane), again with the point at the centre of each rectangle rather than at a corner. These rectangles were then extruded to provide the scaffold.
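The auto-centring logic can be sketched in plain Python (a stand-in for the Grasshopper definition, which is visual; the function names here are mine, not component names): for a count of points and a spacing, each coordinate is shifted by half the grid’s overall extent so the grid straddles the origin rather than growing out from it.

```python
def centred_coords(n, spacing):
    """Return n evenly spaced coordinates whose centroid is 0."""
    offset = (n - 1) * spacing / 2.0
    return [i * spacing - offset for i in range(n)]

def centred_grid(nx, ny, spacing):
    """2D grid of (x, y) points centred on the world origin."""
    return [(x, y) for x in centred_coords(nx, spacing)
                   for y in centred_coords(ny, spacing)]
```

With three points at 600mm centres, for example, `centred_coords(3, 600)` gives `[-600, 0, 600]` rather than `[0, 600, 1200]`.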

2nd Iteration

For the second iteration, I wanted to change the system so that I could run it from any origin, at any orientation. To do this, I specified a single “primary” vector, from which all the perpendicular planes would be generated. In addition, I added a level of granularity to the generation of the grid, splitting it so I could control the number & c/c spacing of “columns” and “beams” separately. The boundary curves in this instance were used to cull the points generated outside these limits.
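The culling step is essentially Grasshopper’s “Point In Curve” test. A rough plain-Python equivalent (my sketch, not the actual component, and restricted to the XY plane) uses the standard ray-casting point-in-polygon check and discards anything outside the boundary:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the closed polygon poly?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does the edge straddle the horizontal ray through pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cull_outside(points, boundary):
    """Keep only the grid points inside the boundary curve."""
    return [p for p in points if point_in_polygon(p, boundary)]
```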

3rd Iteration - Part I

Plan view of generated grid

For the 3rd iteration, I built upon the generation via a “primary vector”, and instead of using three boundary curves (one for each plane), I simplified it to a single site boundary curve on the XY plane. In this instance, I used the site boundary from the undergraduate archive project. Further, I rationalised the other variables at play: number, spacing, and thickness (radius) remained independent for both columns & beams. Column height became a function of the number of beams generated, and beam length a function of the number of columns generated. Overall, the grid still auto-centres on the world origin of 0,0,0, but an override is also provided should the origin require redefining.
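The derived dimensions can be sketched as follows (again plain Python rather than the actual definition, and assuming “function of” means the member spans the full run of the perpendicular set at its c/c spacing):

```python
def scaffold_dims(n_columns, column_spacing, n_beams, beam_spacing):
    """Derive column height and beam length from the perpendicular set.

    Columns rise to meet the topmost beam level; beams run to meet the
    outermost column line.
    """
    column_height = (n_beams - 1) * beam_spacing
    beam_length = (n_columns - 1) * column_spacing
    return column_height, beam_length
```

So five columns at 600mm centres and four beams at 400mm centres would give 1200mm-tall columns and 2400mm-long beams, and rescaling the whole scaffold becomes a matter of changing two sliders rather than rebuilding the model.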

Quirks of this approach include extrusions emanating from each plane’s central axis, and thus being uni-directional unless “mirrored” or converted into a bi-directional extrusion. The grid/scaffold is constrained in all three dimensions using “PointInCurve”, “Sift”, and “PointInBrep”.

3rd Iteration - Part II

Plan view of “captured” columns

The next goal was to take closed BREPs, namely the digital “moulds” I had used for the archive project, and attempt to use those as boundaries, or voids, for the generated grid. This met with mixed success. One of the main issues I encountered with “ShapeInBrep” is that, whilst it has three states (inside/intersect/outside), if you’re attempting to isolate BREPs that have been cut by the boundary in question, they will still be touching it, and so are counted as intersecting rather than inside the cutting BREP. As a result, a workaround would need to be developed, or a wholly different approach taken.
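The problem can be illustrated with axis-aligned boxes standing in for BREPs (my simplification, not how Grasshopper computes containment). A piece produced by splitting against the mould still shares a face with it, so a strict test returns “intersect” even though the piece lies within the mould. One candidate workaround, sketched here, is to classify by the piece’s centroid instead:

```python
def classify(box, mould):
    """Strict inside/intersect/outside test; touching counts as intersecting,
    mirroring the behaviour described for "ShapeInBrep". Boxes are
    ((xmin, ymin, zmin), (xmax, ymax, zmax)) tuples."""
    (amin, amax), (mmin, mmax) = box, mould
    strictly_inside = all(amin[i] > mmin[i] and amax[i] < mmax[i] for i in range(3))
    disjoint = any(amax[i] < mmin[i] or amin[i] > mmax[i] for i in range(3))
    if strictly_inside:
        return 'inside'
    if disjoint:
        return 'outside'
    return 'intersect'

def centroid(box):
    amin, amax = box
    return tuple((amin[i] + amax[i]) / 2.0 for i in range(3))

def point_in_box(pt, box):
    amin, amax = box
    return all(amin[i] <= pt[i] <= amax[i] for i in range(3))
```

A piece whose cut face sits flush with the mould’s surface classifies as `'intersect'`, but its centroid still tests as inside, which is enough to recapture it.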

Screenshot showing my attempt at cleaning up the aftermath of ‘SplitBrepwithBrep’. At least I got to play with logic gates again.

A subsequent issue I encountered when using these digital moulds was the amount of tidying up needed to obtain “clean” cuts of the scaffold. The moulds are of a different structural language to the scaffold, as in the original project, so they do not correspond completely. The image on the right is the best way I can demonstrate the problem.

This issue is what is pushing me to look into grid generation based upon a pre-defined unit (we’ll call it a voxel for now), which can be multiplied or scaled depending upon additional parameters. I envision this would negate the issues brought up here, as it would become a case of “how many whole units are constrained to this BREP mould?” as opposed to “cut BREP with mould BREP” - much like how I already have the grid itself generating with constraints by curve or volume.
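The “whole units constrained to the mould” idea can be sketched like so (plain Python again; a sphere stands in for the mould BREP, and the unit cell is a simple cube - both my assumptions for illustration). Instead of cutting anything, the unit is stepped across a bounding region and kept only when every corner lies inside the mould:

```python
import math

def sphere_contains_box(centre, radius, cell_min, cell_max):
    """True if every corner of the cell lies inside the sphere."""
    corners = [(x, y, z) for x in (cell_min[0], cell_max[0])
                         for y in (cell_min[1], cell_max[1])
                         for z in (cell_min[2], cell_max[2])]
    return all(math.dist(c, centre) <= radius for c in corners)

def whole_voxels_in_sphere(centre, radius, size):
    """Count unit cells of the given size that fit wholly inside the sphere."""
    n = int(math.ceil(radius / size))
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            for k in range(-n, n):
                cmin = (i * size, j * size, k * size)
                cmax = ((i + 1) * size, (j + 1) * size, (k + 1) * size)
                if sphere_contains_box(centre, radius, cmin, cmax):
                    count += 1
    return count
```

No splitting, no clean-up: a cell is either wholly in or it isn’t, which is exactly the property that sidesteps the intersect-vs-inside ambiguity above.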

Generated by layering the composite on the right with a rough poured-concrete texture, then passing it through Illustrator’s Image Tracing algorithm.

Layered photos taken with my phone, then passed through Illustrator’s Image Tracing algorithm.

The ultimate goal would be to take images such as those shown above and use them to indicate the density of the ‘voxels’, in combination with the boundaries set, to generate a three-dimensional form akin to a topography. The greyscale image on the left already lends itself rather well to reading as a topographical map.
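As a first pass at that mapping, a greyscale raster could be read as a height field, stacking voxels taller where the image is darker (the direction of the mapping, and the toy 2×2 “image”, are my assumptions for illustration):

```python
def heights_from_greyscale(pixels, max_height):
    """Map 0-255 greyscale values to voxel-stack heights (dark = tall)."""
    return [[round((255 - p) / 255.0 * max_height) for p in row]
            for row in pixels]
```

Combined with the whole-unit counting above, each pixel would then dictate how many units to stack within the boundary at that grid location, giving the topography-like form directly from the traced image.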