How do I deal with scale of the universe when generating a simulated universe?

```

I have a dataset of 100,000 actual stars. I wish to create a game where I can fly between the stars, fly up to them, click on them, etc. How do I go about dealing with the problem of the scale of the universe? Even if I were to allow faster-than-light travel, it would be too vast. Should I cluster my data and then implement a subtle sort of scaling to move around? Sorry for the broad question, but I'm rather stuck at the moment.
```

The only real problem you’ll be fighting is floating-point precision: a 32-bit float carries only about 6–7 significant decimal digits. When you accumulate floating-point error over astronomical distances, you get astronomical inaccuracy. This causes distant bodies to rapidly jitter between two or more positions from frame to frame.
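To make that concrete, here is a small Python sketch (the `to_f32` helper is mine, not from the original answer) that round-trips values through 32-bit floats to show how coarse the representable steps become at astronomical magnitudes:

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python float (64-bit) through a 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, a 1-unit step is representable just fine.
assert to_f32(100.0 + 1.0) != to_f32(100.0)

# At 1e8 units from the origin, adjacent 32-bit floats are 8 units
# apart, so a 1-unit step is silently lost -- hence the jitter:
assert to_f32(1e8 + 1.0) == to_f32(1e8)
```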

It’s likely you’ll have two coordinate systems and, in most cases, you’ll need to avoid multiplying your way from the less precise system into the more precise one.

USpace = universal scale – fine for fixed star coordinates; precision is lost only once, at load time
CSpace = “in-system” scale – arbitrary

```
NO:
CSpacePosition = USpacePosition * ConversionFactor

YES:
USpacePosition = CSpacePosition / ConversionFactor
```

In other words, don’t move your camera around the universe by light-years at a time and attempt to render the changes at a larger scale.

A large change in CSpace is a small, accurate change in USpace.
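A minimal sketch of the rule, assuming authoritative positions are kept in double precision (the light-year scale and variable names are illustrative, not from the original answer): subtract in the high-precision space first, and only then drop the small relative offset down to 32-bit floats:

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python float (64-bit) through a 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

LY = 9.46e15                        # metres per light-year (illustrative)
camera_uspace = 4.2 * LY            # camera ~4.2 ly from the origin
star_uspace = 4.2 * LY + 1.0e9      # a star 1e9 m beyond the camera

# Wrong: truncate to float32 first, then subtract -- the offset is
# garbage, because float32 steps at this magnitude are ~4.3e9 m wide.
bad_offset = to_f32(star_uspace) - to_f32(camera_uspace)

# Right: subtract in doubles, then truncate the *small* result.
good_offset = to_f32(star_uspace - camera_uspace)

assert good_offset == 1.0e9
assert abs(bad_offset - 1.0e9) > 1e8   # wildly off
```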

Re: comment:
The ViewMatrix is actually just the camera’s WorldMatrix, inverted. That means whichever shader you are currently applying the ViewMatrix in is already “un-moving” the objects (on the GPU). This is more efficient when you are rendering thousands of objects with thousands of WorldMatrices. For a relatively small number of high-detail models, you can afford to “un-apply” the camera’s relative movement directly to the objects’ WorldMatrices on the CPU, instead of applying it to the ViewMatrix to be applied later on the GPU. I’m not sure if there’s a name for the technique, but you’re really just storing/updating the high-detail models pre-transformed into View/Camera-Space.
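As a toy illustration of the relationship (translation-only, no rotation; the names are mine): the view transform is the camera’s world transform inverted, so pre-applying it on the CPU drops an object straight into camera-relative space:

```python
# Translation-only toy: a world matrix reduces to a position vector.
camera_world = (10.0, 2.0, -5.0)    # camera's WorldMatrix translation

# The ViewMatrix is the camera's WorldMatrix inverted; for a pure
# translation, the inverse is simply the negated offset.
view = tuple(-c for c in camera_world)

def to_view_space(p):
    """Pre-apply the view transform on the CPU ("un-move" the object)."""
    return tuple(pi + vi for pi, vi in zip(p, view))

# An object sitting exactly at the camera lands on the view-space origin.
assert to_view_space((10.0, 2.0, -5.0)) == (0.0, 0.0, 0.0)
```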

In the first pass, you’ll feed in the pre-View-Transformed spheres, and they’ll be rendered normally using whatever shaders are required (tessellation, glow, particle, etc.).

The second pass feeds the star coordinates loaded from your dataset directly to the shader (all 100k). This pass uses the same ViewMatrix, with its translation component replaced by the camera’s USpace position. The starfield shader does apply the ViewMatrix on the GPU, so the full list only needs to be uploaded once. ViewSpace is centered on the camera at the origin so, after the ViewMatrix is applied by the shader, each vertex becomes a direction away from the camera. You can then scale each direction vector down so its length falls just inside the view frustum.
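A rough CPU-side sketch of that last step, with an illustrative far-plane value and function name (on the GPU this would happen per-vertex in the shader): keep only each star’s direction from the camera, rescaled to sit just inside the frustum:

```python
import math

FAR_PLANE = 1000.0   # illustrative far-plane distance

def star_to_skydome(star_uspace, camera_uspace):
    """Collapse a star's USpace position to a renderable point: keep
    only its direction from the camera, scaled to just inside the
    far plane. Distance information is discarded on purpose."""
    d = [s - c for s, c in zip(star_uspace, camera_uspace)]
    length = math.sqrt(sum(x * x for x in d))
    scale = (FAR_PLANE * 0.99) / length   # 0.99: just inside the frustum
    return tuple(x * scale for x in d)

# A star half a million ly away still lands on the renderable skydome.
p = star_to_skydome((4.0e17, 0.0, 3.0e17), (0.0, 0.0, 0.0))
assert abs(math.sqrt(sum(x * x for x in p)) - 990.0) < 1e-6
```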

Starfield extra credit:
I doubt a 100k point list is going to give your GPU much trouble, but you may be able to increase performance using the GeometryShader. After applying the View matrix, scaling to the far plane, and applying the Projection matrix, all visible stars will have coordinates in the range -1 to 1, with depth 0 to 1. Emit the vertices that fall inside those ranges and skip the rest. That’s called GPU culling, and it should let you push your vertex count beyond ridiculous.
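A CPU-side emulation of that test (illustrative names; in practice the check lives in the geometry shader and "emitting" means `EmitVertex`/append-to-stream):

```python
def clip_cull(points_clip):
    """Emulate geometry-shader culling: keep only points whose
    post-projection coordinates fall inside the clip volume
    (x, y in [-1, 1], depth in [0, 1])."""
    return [
        (x, y, z) for (x, y, z) in points_clip
        if -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0 and 0.0 <= z <= 1.0
    ]

stars = [(0.5, -0.2, 0.3),   # visible
         (1.5,  0.0, 0.5),   # off-screen right
         (0.0,  0.0, 1.2)]   # beyond the far plane
assert clip_cull(stars) == [(0.5, -0.2, 0.3)]
```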
