What I came away with is that camera sensors see light very differently than humans do. So much so that we must manipulate how the camera captures and/or outputs the light or resulting image to make it easier for humans to see.
All camera sensors see light in Linear space.
This Linear Light needs to be remapped for a number of reasons, mainly to do with better use of the available bit depth.
A linear image needs a lot more bits to maintain the same level of image granularity.
See the Light Illusion white papers - DI Guide, Scene-to-Screen - where this is explained in more detail.
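The bit-depth point above can be sketched with a quick count. Assuming an illustrative 1/2.2 power-law encode (not any particular standard curve) and taking 18% scene-linear as mid-grey, we can count how many of the 256 codes in an 8-bit signal land below mid-grey under each encoding:

```python
# Sketch: how many 8-bit code values describe everything below
# ~18% scene-linear (mid-grey), comparing straight linear encoding
# with a simple 1/2.2 gamma. The 2.2 exponent is an illustrative
# choice, not a specific broadcast or camera standard.

MID_GREY = 0.18  # scene-linear mid-grey, as a fraction of full scale

# Linear: code value is directly proportional to scene light.
linear_codes = sum(1 for c in range(256) if c / 255 <= MID_GREY)

# Gamma-encoded: decoding applies the 2.2 power, so more codes
# sit in the shadows and midtones where vision is most sensitive.
gamma_codes = sum(1 for c in range(256) if (c / 255) ** 2.2 <= MID_GREY)

print(linear_codes)  # 46 codes for everything below mid-grey
print(gamma_codes)   # 117 codes for the same tonal range
```

Linear encoding leaves only a few dozen codes for the entire shadow and midtone region, which is why a linear image needs far more bits to avoid visible banding there.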
Gamma curves are basically designed to compensate for low bit depth and low dynamic range. Gamma gives us more information where we want it most: in the midtones.
Nope - gamma curves exist to enable a TV to show an image correctly. They are based on old CRT display technology and video cameras that pre-date CCD sensors. Using gamma curves for other reasons is a later development that aims to help get the most from a camera, but it means the images will need grading before delivery.
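The historical role described above can be sketched as a round trip, assuming an idealised CRT with a pure power-law response of 2.4 (real CRTs and the actual broadcast standards differ in the details):

```python
# Sketch: camera gamma as pre-distortion for a CRT display.
# Assumes an idealised CRT whose light output follows a pure
# 2.4 power law of the input signal; real hardware varies.

CRT_GAMMA = 2.4

def camera_encode(scene_linear: float) -> float:
    """Pre-distort scene light so the CRT's response cancels it out."""
    return scene_linear ** (1 / CRT_GAMMA)

def crt_display(signal: float) -> float:
    """An idealised CRT turns signal voltage back into light via a power law."""
    return signal ** CRT_GAMMA

# The camera's encode and the display's response cancel,
# so the viewer sees (roughly) the original scene tones.
light = 0.18
assert abs(crt_display(camera_encode(light)) - light) < 1e-9
```

The camera curve and the display curve are designed as inverses, which is why the system "shows an image correctly" end to end without any grading step.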
As for Log, it's designed to redistribute light evenly so as to maximize the areas most important to human vision, such as midtones. Log actually adds dynamic range, which gives us more latitude to work with in post.
Nope - again, the Light Illusion white papers are the best to read. LOG is a form of visually lossless compression that enables the use of fewer bits for a given scene dynamic range. LOG doesn't increase Dynamic Range - nothing can, as that is fixed by the camera sensor.
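The compression idea can be sketched with a generic log curve that gives every stop (doubling of scene light) an equal slice of the code range. This is purely illustrative and assumes a 14-stop sensor range; it is not any real camera's curve (LogC, S-Log and the rest all differ in their exact formulae):

```python
import math

# Sketch: a generic log encoding mapping a fixed sensor dynamic
# range onto [0, 1], one equal code slice per stop. Illustrative
# only - real camera log curves use different constants and toes.

STOPS = 14           # dynamic range to fit, fixed by the sensor
BLACK = 2 ** -STOPS  # darkest representable scene-linear value

def log_encode(scene_linear: float) -> float:
    """Map scene-linear [BLACK, 1.0] to a code value in [0, 1]."""
    clamped = max(scene_linear, BLACK)
    return (math.log2(clamped) + STOPS) / STOPS

# Each doubling of light moves the code value by the same amount,
# so shadow stops and highlight stops get equal shares of the bits:
print(round(log_encode(0.25) - log_encode(0.125), 4))  # 0.0714 (= 1/14)
print(round(log_encode(0.5) - log_encode(0.25), 4))    # 0.0714
```

Note the curve only redistributes the sensor's fixed range across the available codes; nothing in it creates dynamic range that the sensor did not capture.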
So both gamma curves and log encoding are ways to get more out of our cameras today, but they're not necessary for high-end cameras such as an Alexa, Red or F65, because those already have high dynamic range, high bit depth, high resolution and little to no compression.
Nope - See above. And many top end cameras do indeed use Gamma Curves and/or LOG Encoding...