Augmented Programming with GitHub's Copilot
You may or may not have heard of the new GitHub project called Copilot (https://copilot.github.com/).
It aims to use AI to augment the abilities of a human coder, hence its catchphrase "Your AI pair programmer". At the time of writing it is in technical preview, and you need to sign up to participate.
I haven't actually tried it yet, but I can definitely see the promise in augmenting the abilities of coders: the potential for speeding up development would be extremely valuable. However, I have a few concerns about this technology.
One problem I have is the ability to inject code that may not even compile! If you are lucky enough to be working with a compiled language, that may not seem like such a big deal, since the compiler will catch it. Now consider the problem when applied to dynamic languages such as JavaScript, where nothing is checked until the code actually runs. Code can be injected that the human coder may not even understand (which I call the Ignorant-StackOverflow-Coder-Use-Case). To ensure the correctness of the code, the human coder needs to perform extensive testing (hopefully in automated tests). But how can a human coder implement tests for code they may not even understand?
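To make the risk concrete, here is a hypothetical JavaScript fragment of my own invention (not actual Copilot output). It looks plausible and raises no error, yet it is silently wrong:

// Hypothetical example: 'lenght' is a typo, so the property lookup
// silently yields undefined, the loop condition (0 < undefined) is
// false, and the loop body never runs.
function sumOrderTotals(orders) {
  let total = 0;
  for (let i = 0; i < orders.lenght; i++) { // BUG: should be orders.length
    total += orders[i].total;
  }
  return total; // always 0; no compiler exists to flag the mistake
}

console.log(sumOrderTotals([{ total: 10 }, { total: 20 }])); // prints 0, not 30

A statically typed language would reject this at compile time. In JavaScript, only a test asserting the expected sum would catch it - and writing that test presumes the coder understands what the code is supposed to do.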
Another problem I have is traceability. How can a code reviewer see the extent of Copilot-written code versus human-written code? This could be overcome to some degree with disciplined use of a version control system (e.g. special commits, or internal libraries encapsulating the injected code), but this would create annoying overhead. Perhaps Copilot can provide a way for a code reviewer to highlight the augmented code, but as I said, I haven't previewed the technology yet.
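For example, one low-tech convention (purely hypothetical - this is not a Copilot feature) would be to isolate machine-suggested code in its own commits and mark those commits in the message, so a reviewer can list and diff exactly that code:

# Hypothetical convention: commit Copilot-suggested code separately,
# marking it in the commit message.
git add src/parser.js
git commit -m "Add CSV parsing helper [copilot-suggested]"

# A reviewer can then list only the machine-suggested commits...
git log --grep="\[copilot-suggested\]"

# ...and inspect exactly what each one changed.
git show <commit-hash>

Even this simple scheme illustrates the overhead: every accepted suggestion would need to be staged and committed separately from the surrounding human-written code.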
Related to traceability, one of my concerns is the potential for Copilot to systematically inject bugs. If a bug is found in code that the machine learning model was trained on, how can the affected suggestions be traced? From my limited experience of machine learning, identifying the responsible training fragments is probably very difficult, and a single suggestion may draw on many fragments. So I don't know how feasible it would be to solve this problem.
The traceability problem actually works both ways - how can a bug found in injected code at testing time be flagged back to the machine learning pipeline? Such a feedback loop could expose the training and testing of the pipeline to adversarial attacks (https://en.wikipedia.org/wiki/Adversarial_machine_learning), so again I am not sure how feasible solving this problem would be.
Even worse, and related to my concern about bug injection, is the potential for Copilot to pervasively inject security vulnerabilities into code. If a vulnerable pattern from the training data were reproduced across many projects, it would be relatively easy for a bad actor who knows about a zero-day in that pattern to search for matching code and exploit other codebases.
Ideally, I would like to see features that allow a code reviewer to trace all Copilot-injected code and diff it against the most significant training source fragments. I would also want to see the bug history of code related to the injected code, along with any existing automated tests. Broadly speaking, I think we need to apply the same criteria to injected code that we apply to third-party libraries. We carefully evaluate a third-party library for license compatibility, the reputation of its contributors, the depth of community support, and the test coverage of its automated tests, and the same should apply to augmented code.
So I guess you are getting the feeling that I have my reservations about this technology, but I will reserve judgement until I actually use it! It could be the new StackOverflow. Now don't get me wrong - StackOverflow revolutionised the way we code, but it is a double-edged sword that only delivers benefits when used in an informed manner.