With the way rendering is going (see Alan Wake 2 / Nvidia's Ray Reconstruction), it won't take long, maybe a decade, before all rendering is done entirely by AI.
What does this even mean? Each frame will be associated with some declaration of the scene's contents, and the AI will just estimate the render without involving a physically-based rendering pipeline?
Although I do think shading languages, or some other kind of higher-level modelling, are still more appropriate.
In any case, as long as there is a toolchain that generates SPIR-V or PTX, there is a way to get your favorite language onto the GPU.
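Numba is one example of such a toolchain for Python: it can compile a plain Python function down to PTX via `cuda.compile_ptx` (available in recent Numba versions). A minimal sketch, where the `axpy` kernel is just an illustration:

    from numba import cuda, float32

    # An ordinary Python function written as a CUDA kernel:
    # r[i] = a * x[i] + y[i] for each element.
    def axpy(r, a, x, y):
        i = cuda.grid(1)              # global thread index
        if i < r.size:
            r[i] = a * x[i] + y[i]

    # Compile straight to PTX text. This goes through NVVM, so it needs
    # the CUDA toolkit installed, though not necessarily a GPU.
    ptx, resty = cuda.compile_ptx(
        axpy, (float32[:], float32, float32[:], float32[:])
    )
    print(ptx[:200])                  # PTX header emitted by the NVVM compiler

The point isn't Numba specifically: anything that can emit PTX (or SPIR-V on the Vulkan/OpenCL side) slots into the same driver-level pipeline, so the source language is whatever you want it to be.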