
Software 2.0
Kotlin is paving the way for software 2.0 and empowering modern developers. Facebook is building advanced tools for machine learning (ML) programming on top of differentiable programming, a paradigm in which numeric programs are differentiated via automatic differentiation. With differentiable programming, programs can optimize their own parameters by following gradients.
To advance ML programming further, Facebook is now working on a powerful, next-generation tensor typing system and aiming to make differentiability a first-class feature of the Kotlin language. This effort from Facebook's differentiable programming languages team gives machine learning developers and scientists an opportunity to explore software 2.0, software that in effect writes itself, through:
- Seamless differentiation through control flow, data structures, and primitives
- A robust and efficient library providing machine learning APIs and a Tensor class
- Compile-time errors for tensor shapes and differentiable functions
- Tensor typing for static, compile-time shape inference and checking
So, with next-generation features such as differentiable programming built into Kotlin, machine learning developers and data scientists can build sophisticated, extensible, and efficient programs, and catch whole classes of errors before they ever reach the debugger.
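To make the "software that writes itself" idea concrete, here is a minimal, self-contained sketch in plain Kotlin: a one-parameter program whose parameter is found by gradient descent rather than written by hand. The gradient is derived by hand for this toy loss; a differentiable-Kotlin compiler would generate it automatically. Every name and number below is invented for the illustration.

```kotlin
// The "program" is a one-parameter model y = w * x. Instead of hand-tuning
// w, we fit it to data by following the gradient of a squared-error loss.
fun main() {
    val xs = doubleArrayOf(1.0, 2.0, 3.0, 4.0)
    val ys = doubleArrayOf(2.0, 4.0, 6.0, 8.0) // generated by the "true" w = 2

    var w = 0.0               // the parameter the program learns for itself
    val learningRate = 0.01

    repeat(200) {
        // dLoss/dw for loss = mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
        val grad = xs.indices.sumOf { i -> 2 * (w * xs[i] - ys[i]) * xs[i] } / xs.size
        w -= learningRate * grad
    }
    println("learned w = $w") // converges toward 2.0
}
```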
Why Should Differentiable Programming Be in Kotlin?
Compatibility is the main reason to bring differentiable programming to Kotlin: for software 2.0 to succeed, learned code has to interoperate seamlessly with the hand-written software 1.0 code that surrounds it.
A cartpole reinforcement learning model helps illustrate why differentiable programming matters for paving the way to software 2.0 with Kotlin. The model starts with no prior knowledge of the laws of physics and learns, by trial and error, to balance a pole on a moving cart.
Meanwhile, there are plenty of mature physics simulators already available, all written in the traditional programming paradigm. What if we could incorporate one of those existing simulators into the model? Would it learn better?
This is where differentiable programming comes in: arbitrary user (or library) code can be embedded into larger, next-generation models.
Parameterized programs that were never written against an ML library can then be optimized automatically by leveraging gradients, as the sketch below shows. That is how differentiable programming helps developers do well in machine learning.
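As a hedged illustration, consider a toy "simulator" written in ordinary Kotlin with no ML library in sight, and tune its input by gradient descent. A central finite difference stands in here for the gradients a differentiable-Kotlin compiler would derive automatically; the simulator and every name in it are invented for the example.

```kotlin
// A toy simulator: apply a constant force to a unit-mass cart for one
// second (ten 0.1 s steps) and return its final position.
fun simulate(force: Double): Double {
    var x = 0.0
    var v = 0.0
    repeat(10) {
        v += force * 0.1   // acceleration = force / mass, with mass = 1
        x += v * 0.1
    }
    return x
}

fun main() {
    val target = 3.0
    fun loss(force: Double): Double {
        val d = simulate(force) - target
        return d * d
    }

    var force = 0.0
    val lr = 0.5
    val eps = 1e-6
    repeat(100) {
        // Central finite difference as a stand-in for an AD-derived gradient.
        val grad = (loss(force + eps) - loss(force - eps)) / (2 * eps)
        force -= lr * grad
    }
    println("force = $force, final position = ${simulate(force)}")
}
```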
Automatic differentiation (AD)
Performing automatic differentiation in the compiler preserves program structure such as function calls and control flow. It also enables compiler optimizations that are impossible when AD is deferred to runtime.
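To see what "differentiating through control flow" means, here is a minimal hand-rolled forward-mode AD sketch using dual numbers. It illustrates the technique only; it is not Facebook's compiler transform, and the function f with its branch and loop is invented for the example.

```kotlin
// A dual number carries a value together with its derivative, so ordinary
// arithmetic propagates derivatives through calls, branches, and loops.
data class Dual(val value: Double, val deriv: Double) {
    operator fun plus(o: Dual) = Dual(value + o.value, deriv + o.deriv)
    operator fun times(o: Dual) =
        Dual(value * o.value, value * o.deriv + deriv * o.value) // product rule
}

// An ordinary-looking function with a branch and a loop:
// f(x) = x^3 for x >= 0, and x^2 otherwise.
fun f(x: Dual): Dual {
    var result = x
    val n = if (x.value >= 0) 2 else 1
    repeat(n) { result = result * x }
    return result
}

fun main() {
    println(f(Dual(3.0, 1.0)))  // value=27.0, deriv=27.0: f = x^3, f' = 3x^2
    println(f(Dual(-2.0, 1.0))) // value=4.0,  deriv=-4.0: f = x^2, f' = 2x
}
```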
Tensor Typing
A tensor is a multidimensional array, an abstraction that generalizes matrices and vectors. With tensor typing, data scientists and machine learning developers gain compile-time shape inference and checking. Without this feature, shape mismatches surface only at runtime, as bugs that are notoriously difficult to track down.
Easier debugging is not the only benefit: shape types also document the code. Annotations can record the tensor shapes a function expects and accepts, and generics and type aliases raise the comprehensibility of the code to the next level.
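A flavor of this is already expressible in today's Kotlin with phantom type parameters for dimensions. A full tensor-typing system would infer and check shapes for arbitrary ranks; this toy version, with names invented for the example, only tags a matrix with its row and column dimensions so that mismatched multiplications fail to compile.

```kotlin
interface Dim
object D2 : Dim   // a dimension of size 2
object D3 : Dim   // a dimension of size 3

// The type parameters R and C are phantom shape tags; the runtime sizes
// live in rows/cols.
class Matrix<R : Dim, C : Dim>(val rows: Int, val cols: Int,
                               val data: DoubleArray)

// Multiplication is only defined when the inner dimensions agree.
fun <R : Dim, K : Dim, C : Dim> matmul(
    a: Matrix<R, K>, b: Matrix<K, C>
): Matrix<R, C> {
    val out = DoubleArray(a.rows * b.cols)
    for (i in 0 until a.rows)
        for (j in 0 until b.cols)
            for (k in 0 until a.cols)
                out[i * b.cols + j] += a.data[i * a.cols + k] * b.data[k * b.cols + j]
    return Matrix(a.rows, b.cols, out)
}

fun main() {
    val a = Matrix<D2, D3>(2, 3, DoubleArray(6) { 1.0 })
    val b = Matrix<D3, D2>(3, 2, DoubleArray(6) { 1.0 })
    val c = matmul(a, b)      // ok: (2x3) * (3x2) -> (2x2)
    // val bad = matmul(a, a) // rejected at compile time: inner dims differ
    println(c.data.toList())
}
```

A type alias such as `typealias Hidden = Matrix<D2, D3>` could then give a domain-specific name to a shape, which is exactly the documentation benefit described above.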