Hi,

Discover the cutting-edge world of NeRFs in our new post, "NeRFs Explained: Goodbye Photogrammetry?", and see how they can revolutionize your projects!

Have you ever tried to understand Neural Radiance Fields (NeRFs)? 

Back in 2020, this concept was all over the place… and we've noticed that every time a new 3D Reconstruction algorithm gets released, the Computer Vision community is on fire!

So, what are NeRFs?

In short, 3D Reconstruction with Deep Learning.

Two weeks ago, we welcomed Jeremy Cohen from Think Autonomous to tell us about Photogrammetry — the classical way to do 3D Reconstruction.

Today, we’re moving on to blog post 2, where Jeremy will introduce you to NeRFs. You will learn:

  • What is a “radiance field”, and why did this approach become popular in 3D Reconstruction?

  • The exact “3-block” process from taking pictures to getting 3D scenes, and the secrets behind the Deep Neural Network used.

  • Why the inputs to a NeRF aren't 2D, 3D, or 4D, but 5D! (hint: NeRFs do more than go from images to 3D; they actually go from cameras to 3D, which means viewpoints are of the utmost importance).

  • The “ray marching” technique that initiated the 3D Reconstruction revolution, and why it’s also what could kill NeRFs.

  • The commented math formula behind volumetric rendering (there's a quick NumPy taste of this idea right after the list).

  • The 2 algorithms derived from NeRFs that achieve the same 3D Reconstruction quality anywhere from 10x to… 2548x faster!

  • And many more…
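If you want a concrete taste of the ray-marching and volumetric-rendering bullets before diving in, here is a toy NumPy sketch (names like toy_radiance_field and render_ray are made up for illustration; this is not code from the blog post). It samples 5D inputs, a 3D position plus a viewing direction, along one camera ray, queries a stand-in for the NeRF network, and alpha-composites the samples into a pixel color using the discrete volume rendering weights from the original NeRF paper:

# Toy sketch (illustrative only, not the blog post's code).
import numpy as np

def toy_radiance_field(xyz, view_dir):
    """Stand-in for the NeRF MLP: maps a 5D input (3D position + viewing
    direction) to an RGB color and a density sigma. The direction is passed
    as a unit vector and ignored in this toy version; a real NeRF uses it
    for view-dependent color."""
    rgb = 0.5 * (np.sin(xyz) + 1.0)          # fake color in [0, 1]
    sigma = np.maximum(0.0, xyz[..., 0])     # fake non-negative density
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Ray marching: sample points along the ray, query the field,
    and alpha-composite them into a single pixel color."""
    t = np.linspace(near, far, n_samples)               # sample depths
    points = origin + t[:, None] * direction            # (n_samples, 3)
    view_dir = np.broadcast_to(direction, points.shape)
    rgb, sigma = toy_radiance_field(points, view_dir)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                 # opacity of each sample
    # Transmittance: how much light survives up to each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)          # final pixel color

pixel = render_ray(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # RGB color predicted for this ray

In a real NeRF, toy_radiance_field is replaced by the trained neural network, and this compositing is repeated for every pixel of every training image so the rendering error can be backpropagated into the network. Jeremy walks through the full formula, term by term, in the post.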

Here's the link to the blog post: https://pyimg.co/sbhu7