Please read http://madebyoll.in/posts/world_emulation_via_neural_network.
You can also find more experimental neural worlds at https://neuralworlds.net.
NWCapture is an iOS app for recording video + motion data (nwraw bundles) that can be converted into neural worlds.
Once you have the NWCapture app installed:
- Collect the recording:
  - Find an interesting or beautiful moment / place.
  - Before recording, move your phone around to initialize tracking, then hold it still and confirm tracking is stable before you start recording.
  - Record a single area densely (walking / looking around for several minutes with a variety of camera motions & viewpoints) rather than walking in a straight line or standing in one place. Worlds work best when the training data has dense coverage of diverse actions within a limited scene.
  - Keep your camera around head/neck level (so the video viewpoint is similar to your eyes' viewpoint). Worlds work best when the training-time controls behave similarly to the first-person mouse+WASD controls used at inference time.
  - Record around 5-15 minutes of video (10 minutes is good). More coverage generally yields less-glitchy worlds.
- Press the share arrow to export the nwraw. For conversion, you'll want to send/upload:
  - The nwraw bundle. Links to Google Drive, iCloud Drive, etc. should work. Bonus points if you zip/tgz it first (TODO: the app should do this eventually).
  - (Optional) A short description for the neuralworlds.net page.
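Since the app doesn't compress exports yet, here's a minimal sketch of tgz-ing the bundle yourself before uploading. The bundle name below is a placeholder (and the placeholder directory is created just so the example runs end-to-end); substitute your actual export.

```shell
# Placeholder standing in for a real NWCapture export -- replace with yours.
mkdir -p recording.nwraw && echo demo > recording.nwraw/clip.bin

# Compress the whole bundle into a single file for upload
# (-c create archive, -z gzip it, -f output filename).
tar -czf recording.nwraw.tgz recording.nwraw
```

A plain `zip -r recording.nwraw.zip recording.nwraw` works just as well if you prefer zip.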
My yield of somewhat-playable worlds is still quite low even when following these guidelines 🥲 but so far they've given the best odds.