Wednesday, August 25, 2021

Between a tetrahedron and a sphere

What's in between a tetrahedron and a sphere? You could partially smooth the tetrahedron, but then it no longer has any flat sides; or you could move up through higher solids such as the octahedron and then the icosahedron, but these still have sharp edges, unlike a sphere.

A polyhedron has C0 continuity and a sphere has C∞ continuity. It makes sense that an intermediate could have C1 continuity, which is what this shape has:

We start with a tetrahedron:

then slice off the corners and the edges:
and repeat:
the limit as the number of iterations increases is:

I am just showing the edges, but this is a convex solid with flat surfaces inside all the circles. In this one the corners are cut off so that the slices touch, and the edges are cut off one third of the way along the sliced edges. The result has octahedral symmetry, with eight largest circular faces.

It is C1 continuous because its face angle (the first derivative of the surface) follows a devil's staircase function as you move across an edge, and this staircase is a continuous function.
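The devil's staircase is a Cantor-function-like curve: continuous everywhere, yet locally flat almost everywhere, which is how a surface made of flat facets can still have a continuous first derivative. A minimal sketch of the classic Cantor function (my illustration of the function class, not the exact angle function of this solid):

```python
def cantor(x, iters=60):
    """Cantor ('devil's staircase') function on [0, 1]:
    continuous everywhere, locally constant almost everywhere."""
    if x >= 1.0:
        return 1.0
    y, w = 0.0, 0.5
    for _ in range(iters):
        d = int(x * 3)        # next base-3 digit of x
        if d == 1:            # landed in a middle third: flat from here on
            return y + w
        y += w * (d // 2)     # a digit 2 contributes a binary 1
        x = x * 3 - d
        w /= 2
    return y
```

Despite being flat on a dense set of intervals, the function climbs from 0 to 1 without any jumps.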

Here's the solid object:

and here's the same thing but starting with a cube:

Here is the octahedral version rendered with transparency and refraction:




Friday, April 2, 2021

Mean Lumps

So what does a mean shape look like overall? I'm talking about any sort of blob-like shape: a rock, a piece of dough, a pebble, a chunk of ice in Saturn's rings.

This parameter space is pretty hard to pin down, and each of the above has a different formation process. But one common feature is that they are fairly smooth, with a continuous, simple surface. We aren't talking about sea urchins, sponges or tree shapes.

Regardless of the exact parameter space, the average shape is constrained by the equivalence class, and it is because of this that we can get a rough shape without knowing the actual probability density over the parameter space.

In this post I attempt an average shape using an octahedron with randomly selected radial distances for the six vertices, using a random distribution of 'energies' (the square of the distance) as in previous posts. In fact we already have the result: it comes from the dice numbers in the mean textures post. The dice values for sides 1, 2, 3, 4, 5, 6 are 3.35, 2.57, 2.86, 2.86, 4.58, 5.98, which, translated to radial distances (the square roots of these energies), gives this shape:

The left side is the octahedron from three angles, and the right is a smoothed version to give an idea of the blob shape. As you can see, it has bilateral symmetry, but is elongated and somewhat egg-shaped in the two other axes.
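As a sketch of that translation (the mapping of die faces to axes is my assumption; the post's images define the actual layout):

```python
import math

# Dice-face 'energies' from the mean textures post, for faces 1..6.
energies = [3.35, 2.57, 2.86, 2.86, 4.58, 5.98]

# Energy is the square of the radial distance, so distance = sqrt(energy).
radii = [math.sqrt(e) for e in energies]

# Place the six octahedron vertices on the coordinate axes.
# Which die face maps to which axis is an assumption here; a standard
# die has opposite faces summing to 7 (1-6, 2-5, 3-4), matching this order.
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, -1), (0, -1, 0), (-1, 0, 0)]
vertices = [(r * x, r * y, r * z) for r, (x, y, z) in zip(radii, axes)]
```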

We can do the same thing with icosahedra. The result is:


vertices (ϕ is the golden ratio):
    x:  1, -1,  0,  ϕ,  ϕ,  0, -ϕ,  0,  1,  0, -ϕ, -1
    y:  0,  0,  ϕ,  1, -1, -ϕ,  1,  ϕ,  0, -ϕ, -1,  0
    z:  ϕ,  ϕ,  1,  0,  0,  1,  0, -1, -ϕ, -1,  0, -ϕ

lengths: 1.03, 0.91, 0.91, 0.92, 0.92, 0.90, 0.93, 1.65, 0.93, 0.93, 0.93, 1.04


The result is a bit different, but the main features are similar to the octahedron case: it has bilateral symmetry on one axis and an egg-shaped elongation on the other two axes. The proportions of these two elongations are similar to the octahedral case; the main difference is that the icosahedral blob is fatter (bottom row).

Nevertheless, these two blobs are quite similar looking, and remind me a bit of an almond.






Friday, January 1, 2021

Mean particles

The results of the previous post on mean points show that average sets of points tend not to be next to each other. If we introduce time into the system, this can be seen as a repulsion between the points. So let's try this out.

Let's look at points in a 1+1D world, where the spatial dimension is a circle. We can evolve a set of points as follows:

  1. Pick any set of starting points; these are also the current mean point set.
  2. Create a large population of sets, each equal to this starting set.
  3. For each time step, add a random offset to each point set, then rotate it around the circle to be as close as possible to the mean set. Accumulate the deltas (the relative angles of these rotated points to the mean points).
  4. Add the total delta onto the mean point set (i.e. generate a new mean set).
  5. Go to step 3.
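The steps above can be sketched in numpy as follows (the population size, dispersion sigma and the grid of trial rotations are my choices, not from the post):

```python
import numpy as np

def circ_delta(a, b, length):
    # Signed shortest angular difference on a circle of the given circumference.
    return (a - b + length / 2) % length - length / 2

def evolve(mean, n_steps=50, n_pop=500, sigma=0.05, length=10.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = np.tile(mean, (n_pop, 1)).astype(float)   # step 2: population of copies
    history = [np.array(mean, float)]
    offsets = np.linspace(0.0, length, 200, endpoint=False)  # trial rotations
    for _ in range(n_steps):
        pop = pop + rng.normal(0.0, sigma, pop.shape)  # step 3: random offsets
        total_delta = np.zeros(len(mean))
        for s in pop:
            # Rotate the set around the circle to best match the current mean.
            deltas = circ_delta(s[None, :] + offsets[:, None], history[-1], length)
            best = deltas[np.argmin(np.sum(deltas ** 2, axis=1))]
            total_delta += best
        # Step 4: move the mean by the average accumulated delta.
        history.append((history[-1] + total_delta / n_pop) % length)
    return history
```

Plotting each entry of the returned history from bottom to top reproduces the style of trajectory diagram described below.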

The results are shown with time going from bottom to top, with starting positions 1, 1.1, 7, 8 (angles out of 10 ticks around the circle):

Notice that points appear to repel each other, and the two points closest to each other repel the quickest.

The size of the random offset applied at each time frame affects how fast the particles communicate and 'see each other', with no repelling until the random dispersion of each point starts to interfere with its neighbour.

Since we use a large population of random sets, it basically represents a uniform dispersion of the sets over time, with a chosen dispersion rate. The dispersion rate does not have any effect other than changing the time scale on the resulting trajectories. So the results are effectively parameter-free. They are simply the mean trajectories of a set of n points, given the circle's rotational symmetry, and a maximum communication speed. As such, the trajectories are otherwise entirely defined by the initial position of the points.

The behaviour isn't particularly interesting though; we can make it more so by going from first-order to second-order dynamics. This is done by adding the change in the mean points onto all the candidates at each time step, reflecting the tendency of particles to maintain their velocity. The result has a little more oscillation:

This is basically degeneracy pressure in action, just without complex numbers. However, there is an additional behaviour in this system: unlike degeneracy pressure in absolute coordinates, which converges (if it loses energy over time) to equidistant points, with circular symmetry included the points stabilise to the non-uniform dispersal seen in the previous post. At least they would if the communication distance were wide enough.

This makes it a system that has repulsion and a sort of attraction (as the converged set of points cluster on one side of the circle). This is not visible on the above graph due to the communication distance being lower, but it would be interesting to investigate this behaviour further.  

I expect the 2D version (on a sphere) to show even more unusual behaviour; it could well display orbiting around the lowest-energy point locations (seen in the previous post).

Also note that the set of particles conserves momentum. At least, it appears to.

Monday, December 7, 2020

Mean points

We can also look for a mean set of points. This has continuous rather than discrete symmetries. The starting point is the 1D case: what is the mean set of repeating point locations? It depends on how many points are in the sequence, so here is the answer for 1 up to 10 points:

The images on the left represent the sequence as the points in a circle (rendered as radial spokes). 

As you can see, the result is fairly trivial: the sequence seems to be symmetric, with a single largest gap and progressively smaller gaps on either side of it.

If we include reflection of the sequence in the equivalence class, the results appear to be exactly the same.

2D

We could explore the 2D case in a toroidal universe as with the previous mean textures post, but this doesn't easily allow a continuous rotational symmetry. Instead, let's look at a spherical universe, and find the mean set of n random points on a sphere.

To do this, I need a way to find the least-squares orientation between two sets of points on a sphere. To avoid the uncertainties of approximate methods (such as Iterative Closest Point) I compare all permutations of the points and run an absolute orientation function on each permutation (Eigen::umeyama). The rotation with the smallest sum-of-squares difference is chosen and applied to one set of points before averaging the sets and renormalising.
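A sketch of that search, using the SVD-based Kabsch algorithm in place of Eigen's umeyama (the same least-squares rotation, just a different library; function names are mine):

```python
import itertools
import numpy as np

def best_rotation(P, Q):
    """Kabsch: rotation R minimising sum ||R p_i - q_i||^2 (rows are points)."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def align(A, B):
    """Try every permutation of A's points; return A permuted and rotated
    onto B with the smallest sum-of-squares difference."""
    best_err, best_pts = np.inf, None
    for perm in itertools.permutations(range(len(A))):
        P = A[list(perm)]
        R = best_rotation(P, B)
        moved = P @ R.T
        err = np.sum((moved - B) ** 2)
        if err < best_err:
            best_err, best_pts = err, moved
    return best_pts
```

The mean step then averages `align(A, mean)` into the running mean and renormalises each point back onto the sphere.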

Unfortunately, this permutations approach is intractable for larger numbers of points, so I show here only up to 6 points, giving just a hint of the pattern for mean points on a sphere.

2 points:  (right angle)

0.437025 0.490747 0.753775,   0.67137 0.379744 -0.63644

3 points: 

 
0.147701 -0.525988 -0.837568,   0.981211 -0.192266   0.01611,   0.104602 -0.529166  0.842047

4 points:

-0.394284 -0.910857 -0.121984,  -0.680957  0.712358 -0.169835,   -0.983752 -0.0842168   0.158555,   0.184711  0.622359 -0.760626

5 points:

-0.295731  0.738672 -0.605728,   0.690733 0.0875932 -0.717785,   -0.97845 0.0669067 -0.195341,  -0.301121 -0.657821 -0.690361,  -0.651082 -0.174766  0.738613

6 points:

0.129365 -0.627223  0.768021,  -0.434599 -0.871754 -0.226204,   0.367076 -0.768696 -0.523795,  -0.841728 -0.113547  0.527827,   0.769293  0.153797 -0.620109,  -0.778213   0.13924  -0.61237
 


These at least show that the mean set of points on a sphere does not follow a trivial pattern: the points aren't in a plane, or even within a radial cone.


Wednesday, December 2, 2020

Mean textures

We can extend the idea of mean signals into 2D, as mean textures. In the previous post I found two signed and two complex signals that were non-trivial (they didn't tend to a spike signal). Here I look at the case for 2D, on signed textures (greyscale with 127=0). 

Here are the results with no 90 degree rotational symmetry. Showing a 5x5 tiled patch from a 2x2 texture (left) up to a 5x5 texture (right):

If I include reflection when matching:


If I include 90 degree rotational symmetry (no reflection):

And with reflection included:

Finally, for comparison, here is an unsigned texture, with 90 degree rotational symmetry, no reflection:


The unsigned texture is just like the unsigned signal of the previous post. The mean is a single spike. For the signed textures, they all appear to tend towards a pair of extremes (one positive, one negative) as the width goes up to 5.

One reasonable variant is for the random values to be drawn as a uniform distribution of signal energies (value squared). This supports the need for negative values, since each value is the square root of a uniformly sampled energy, and the square root can take either sign.

Here is with no reflection: (28 iterations)

and with reflection (28 iterations):

These do not show the same tendency towards a trivial case of one or two spikes at larger resolutions. However, the accuracy of the 5x5 is very poor, due to the huge number of iterations needed to converge. The rest are quite close, as verified by trialing with different random seeds.

Here's the above 4x4 (no reflection) shown over a larger surface:

We can also do something very similar in the complex domain. Here I use the red component for real values, the green for imaginary values, and set the blue to halfway. I use a lower iteration count (23), just to give an indication of what it might look like:

without reflection:

with reflection: 


Mean signals

Here I look at the average of discrete repeating signals. That is, I pick random signals (by sorting a random set of values between 0 and the signal length and taking the differences). Then I match each signal to the running average using a cross-correlation, by sliding in time and optionally reversing the signal direction.
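The sampling and matching steps can be sketched like this (the names and the running-average scheme are mine):

```python
import numpy as np

def random_signal(n, rng):
    # Sort n-1 uniform values in [0, n] and take differences:
    # n non-negative values summing to n.
    cuts = np.sort(rng.uniform(0.0, n, n - 1))
    return np.diff(np.concatenate(([0.0], cuts, [float(n)])))

def best_match(sig, mean, allow_reflection=False):
    # Circular cross-correlation: pick the shift (and optionally the
    # reversal) with the largest dot product against the running mean.
    cands = [np.roll(sig, k) for k in range(len(sig))]
    if allow_reflection:
        cands += [np.roll(sig[::-1], k) for k in range(len(sig))]
    return max(cands, key=lambda c: float(np.dot(c, mean)))
```

The running mean over many random signals is then `mean = (i * mean + best_match(sig, mean)) / (i + 1)`.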

Here are the sequences visualised, from length 1 to 10. One grid cell is 2 units high.


no reflection                                        reflection

We can also include negative numbers. Each candidate sequence has a sum of absolute values equal to the sequence length. A cross-correlation works equally well, but we can also choose whether to allow negation of the values when matching. Here grid cells are 8 units.

1,3: no reflection, 2,4: reflection. 3,4 includes negation when matching.

A different approach is to pick random values between 0 and 1, and match based on minimising sum of square error, rather than maximising cross correlation:

left side: unsigned, right side signed. 1,3 no reflection, 2,4 allows reflection

Notice that the results get smaller for larger sequences. This isn't true for the cross-correlation, and I find the cross-correlation method more general as it represents scale-independent matching.


A natural extension of the cross-correlation on signed numbers is to use complex numbers. We cross-correlate as the sum of one signal times the conjugate of the other, and pick the match with the largest real component.

 
left: no reflection                                    right: with reflection of signal index

There are more reflection options for complex numbers. We could also negate the value in the reflected match (below left), or conjugate the complex value in the reflected match (below right): 

 

Of the four, it looks like the first two (no-reflect and reflect) give the least trivial result.

We can also use the largest absolute value in the cross-correlation, and then 'rotate' the complex signal when adding it on. This adds an explicit phase invariance. However, it gives more trivial results:

left: no reflection, middle: reflection included, right: reflection with conjugation.

It is not clear that having an even distribution of absolute values is the best constraint on signed and complex values. It may make sense to have an even distribution of signal energy (value squared). Doing so gives results with slightly less spiking and clearer patterns:


left: no reflection, right: reflection allowed.

I have only shown signal lengths 1 up to 8, as the larger ones become too inaccurate, but the above are repeatable. The no-reflection case shows a hint of a pattern, with the n=2, 4 and 8 cases close to square waves, and the n=3 and 6 cases being uneven square waves. The reflection case shows signs of following a sawtooth wave until about n=6.

Monday, November 30, 2020

Mean polyhedrons

The previous post described a method to find the average polygon with a certain number of sides, assuming that one defines a polygon by its interior angles and edge lengths.

If we just define shapes by their edge lengths then we can obtain average triangles and also average tetrahedra. Without loss of generality, we pick randomly distributed edge lengths which sum to 1.
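A sketch of the triangle averaging (the post doesn't say whether degenerate triangles are discarded; here I keep every sample, and the function names are mine):

```python
import numpy as np

def random_edges(rng):
    # Two uniform cuts of the unit interval give three lengths summing to 1.
    cuts = np.sort(rng.uniform(0.0, 1.0, 2))
    return np.diff(np.concatenate(([0.0], cuts, [1.0])))

def best_cycle(edges, mean, allow_reflection=False):
    # A triangle's edge cycle can be rotated (and optionally reversed)
    # before comparing to the running mean; keep the best dot product.
    cands = [np.roll(edges, k) for k in range(3)]
    if allow_reflection:
        cands += [np.roll(edges[::-1], k) for k in range(3)]
    return max(cands, key=lambda c: float(np.dot(c, mean)))

def mean_triangle(n=20000, allow_reflection=False, seed=0):
    rng = np.random.default_rng(seed)
    mean = random_edges(rng)
    for i in range(1, n):
        m = best_cycle(random_edges(rng), mean, allow_reflection)
        mean = (i * mean + m) / (i + 1)  # running average of aligned samples
    return mean
```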

For triangle edge lengths a,b,c, we get the means (shown over 4 independent trials): 

a,b,c: 0.4, 0.2, 0.4

An isosceles triangle.

When we include reflections, we get an obtuse triangle:

a,b,c: 0.444, 0.361, 0.194


For tetrahedra we pick 6 edge lengths and throw out any invalid tetrahedra: those with faces that disobey the triangle inequality (the sum of the two shorter sides must exceed the longest side) or that disobey the tetrahedron inequality (the sum of the three smaller face areas must exceed the largest face area).
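Those two checks can be sketched as follows (the edge-to-vertex labelling is my choice, and the face-area condition is the post's stated criterion):

```python
import math

def face_area(x, y, z):
    # Heron's formula; returns 0 for degenerate triples.
    s = (x + y + z) / 2
    v = s * (s - x) * (s - y) * (s - z)
    return math.sqrt(v) if v > 0 else 0.0

def valid_tetrahedron(a, b, c, d, e, f):
    """Edges: a=01, b=12, c=20 around the base; d=03, e=13, f=23 to the apex."""
    faces = [(a, b, c), (a, d, e), (b, e, f), (c, d, f)]
    # Triangle inequality on every face: the two shorter edges must
    # sum to more than the longest one.
    for x, y, z in faces:
        if x + y + z - 2 * max(x, y, z) <= 0:
            return False
    # Tetrahedron inequality: the sum of the three smaller face areas
    # must exceed the largest face area.
    areas = sorted(face_area(*t) for t in faces)
    return sum(areas[:3]) > areas[-1]
```

Random edge sextuples are simply resampled until this returns True.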

With the remaining tetrahedra, we find the closest match (dot product of the vectors of edge lengths) over all relative orientations; for a tetrahedron that is 12 orientations. We convert the mean edge lengths into coordinates like so:

Under this system, the mean tetrahedron is fairly unstable, but seems to look a bit like this:

a,b,c,d,e,f: 0.157258, 0.184638, 0.178683, 0.143683, 0.16418, 0.171558
r,s,t,u,v: 0.115537, 0.144023, 0.113931, 0.035557, 0.123238

coords: (0,0,0), (0.179,0,0), (0.116, 0.144,0), (0.114, 0.0356, 0.123)

If we allow reflection when matching shapes, then the result looks more like this:

a,b,c,d,e,f: 0.195577, 0.184195, 0.126073, 0.159834, 0.193624, 0.140698
r,s,t,u,v: 0.0458932, 0.178386, 0.0402287, 0.0351516, 0.130161
coords: (0,0,0), (0.126,0,0), (0.0459, 0.178,0), (0.0402, 0.0352, 0.130)

Here it is top down: 


Cuboids

For normal right-angled cuboids, we know from the mean signals post that the mean edge lengths must be proportional to 2, 5 and 11:


This is because the cuboid has no further constraints on its edge lengths, and its symmetries are the same as those of a length-3 sequence with reflection allowed.
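A quick Monte Carlo check of that ratio: for three values, the three rotations plus reflection reach all six orderings, so aligning to the mean amounts to sorting, and the expected sorted spacings of two uniform cuts of [0, 1] are 1/9, 5/18 and 11/18, i.e. proportional to 2, 5 and 11.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two uniform cuts of [0, 1] give three segment lengths; sort each triple.
cuts = np.sort(rng.uniform(0.0, 1.0, (n, 2)), axis=1)
segs = np.diff(np.concatenate([np.zeros((n, 1)), cuts, np.ones((n, 1))], axis=1), axis=1)
mean_sorted = np.sort(segs, axis=1).mean(axis=0)

# Expected: 1/9, 5/18, 11/18  ->  proportional to 2 : 5 : 11.
print(mean_sorted * 18)   # approaches [2, 5, 11]
```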