GitHub’s leap into the future: How AI is reshaping “Shift Left” in development for enhanced cyber defense

In an exclusive interview, Jacob DePriest, Deputy Chief Security Officer at GitHub, highlights the groundbreaking use of artificial intelligence in redefining the concept of “shift left” in software development. He shares how GitHub is at the forefront of incorporating AI into every step of the development process, ensuring security is not an afterthought but an integral part of code creation.

How does GitHub use AI to redefine “shift left” in development, and what features contribute to this shift?

“Shift left” has typically meant getting security feedback after you’ve brought your idea to code, but before deploying it to production. However, AI takes it further and starts to make the true promise of “shift left” a reality. When used effectively, AI can transform security by proactively preventing vulnerabilities from being written in code. It provides context for potential vulnerabilities and recommends secure code from the outset (though we still recommend testing AI-generated code). By empowering developers to write more secure code in real time, AI can help build security in, right where developers code, rather than bolting it on at the end. This marks a new era in cyber defence with generative AI at the forefront.

At GitHub, we’re already using AI to redefine what it means to “shift left” and revolutionise how developers build secure applications from the get-go. GitHub Copilot applies an LLM-based vulnerability prevention system that blocks insecure coding patterns in real time, making its suggestions more secure. Our model targets the most common vulnerable coding patterns, including hardcoded credentials, SQL injection, and path injection, and we continue to develop and improve this capability. GitHub Copilot Chat can also help identify security vulnerabilities in the IDE, explain the mechanics of a vulnerability in natural language, and suggest a specific fix for the highlighted code. We’ve also infused GitHub Advanced Security with new AI-powered application security testing features designed to better detect and remediate vulnerabilities and secrets in a developer’s code and suggest fixes directly in the developer’s workflow.
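To make the pattern classes named above concrete, the sketch below shows each of them in plain Python alongside a safer alternative. It is only an illustrative example of the vulnerability classes mentioned (hardcoded credentials, SQL injection, path injection), not GitHub’s actual prevention model or the exact suggestions Copilot produces.

```python
import os
import sqlite3
from pathlib import Path

# Hardcoded credential (insecure) vs. reading it from the environment (safer).
DB_PASSWORD = "hunter2"                      # insecure: literal secret in source
db_password = os.environ.get("DB_PASSWORD")  # safer: injected at runtime

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # SQL injection: untrusted input concatenated into the query string.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver handles escaping of the value.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

def read_report_insecure(filename: str) -> str:
    # Path injection: a filename like "../../etc/passwd" escapes the
    # intended directory.
    return (Path("/var/reports") / filename).read_text()

def read_report_secure(filename: str) -> str:
    # Resolve the path and verify it stays inside the allowed directory
    # (Path.is_relative_to requires Python 3.9+).
    base = Path("/var/reports").resolve()
    target = (base / filename).resolve()
    if not target.is_relative_to(base):
        raise ValueError("path escapes the reports directory")
    return target.read_text()
```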

What are the key highlights of GitHub’s latest security announcements, showcasing its commitment to staying ahead in cybersecurity?

As mentioned, we’re infusing GitHub Advanced Security with new AI-powered application security testing features designed to accelerate the detection and remediation of vulnerabilities and secrets in a developer’s code. GitHub’s code scanning autofix feature uses CodeQL to suggest AI-generated fixes directly in pull requests. These are not just any fixes, but precise, actionable suggestions that help developers quickly understand what a vulnerability is and how to remediate it. Developers can commit these fixes instantly, helping them resolve issues faster and preventing new vulnerabilities from creeping into their codebases.
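As an illustration of the kind of small, reviewable change described here, consider a finding a code scanner might raise for unsafe YAML deserialisation in Python. The before/after below is a hypothetical example of such a fix, not actual output from code scanning autofix or from a specific CodeQL query.

```python
import yaml  # PyYAML

# Before: yaml.load with the full Loader can construct arbitrary Python
# objects from untrusted input, a classic deserialisation risk.
def parse_config_before(raw: str) -> dict:
    return yaml.load(raw, Loader=yaml.Loader)

# After: a precise, one-line change that is easy to review and commit from a
# pull request; safe_load only builds plain data types (dicts, lists, strings).
def parse_config_after(raw: str) -> dict:
    return yaml.safe_load(raw)

print(parse_config_after("retries: 3\ntimeout: 30"))  # {'retries': 3, 'timeout': 30}
```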

We’ve also introduced the power of AI to detect generic or unstructured secrets in a developer’s code. Leaked passwords are a common root cause of malicious attacks, yet they are hard to detect with traditional scanning solutions because, unlike most API tokens, they follow no predictable structure. GitHub’s secret scanning feature now leverages AI to detect passwords, making it easier to identify potential leaks in code with low false positive rates. Combined, these new detection capabilities help developers prevent and detect more security vulnerabilities across an organisation, keeping code safe for the future.
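The difficulty is easy to see in a toy example. Structured tokens (for instance, classic GitHub personal access tokens, which start with the ghp_ prefix) can be caught with a simple pattern, while a generic password looks like any other string literal. The Python sketch below is purely illustrative and is not how GitHub’s secret scanning works internally.

```python
import re

# Structured secrets follow a known shape, so a simple pattern catches them.
STRUCTURED_TOKEN = re.compile(r"\bghp_[A-Za-z0-9]{36}\b")

lines = [
    'GITHUB_TOKEN = "ghp_' + "x" * 36 + '"',   # structured: matched by the regex
    'db_password = "Tr0ub4dor&3"',             # generic password: no fixed shape
]

for line in lines:
    if STRUCTURED_TOKEN.search(line):
        print("regex scanner flags:", line)
    else:
        # A generic password looks like any other string, which is why the
        # interview points to AI-based detection for this class of secret.
        print("regex scanner misses:", line)
```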

In what ways does GitHub integrate AI into every step of the development process for developers, ensuring security is built in?

GitHub has been at the forefront of bringing AI to individuals, teams, and enterprises alike, and security remains core to how we think about every new product and feature we develop. Now, by bringing the power of AI to GitHub Advanced Security, we’re putting AI in developers’ hands at every step of the development process as they bring their ideas to code. As we continue to innovate and infuse AI into the developer workflow, our focus is steadfast on helping the 100M+ developers on GitHub write more secure code in real time, leading to more secure software outcomes, faster.

How does GitHub collaborate with the open-source community to fulfill its commitment to securing the ecosystem?

Vulnerabilities in open-source code can have a global ripple effect, impacting the billions of people and services that rely on it. Yet the fact of the matter is that critical open-source software is often developed on a voluntary basis, and individual maintainers do not have the same resources or dedicated security teams as the businesses that rely on their code. With over 100 million developers calling GitHub home, we recognise the critical role GitHub plays in securing the software ecosystem, and we’re heavily invested in our responsibility to support developers and open-source maintainers.

This is why GitHub offers open-source projects free access to security features and tooling like secret scanning, code scanning, and Dependabot. GitHub also offers ways developers can gain access to funding and training that help secure open-source software. For instance, through GitHub Sponsors, individuals and organisations can financially support the developers who design, build, and maintain the open-source projects they depend on. Additionally, the GitHub Security Lab offers free security training and educational materials to developers, and audits open-source projects to help maintainers fix security vulnerabilities. At GitHub, we firmly believe it’s important to be both a responsible consumer of software and a contributor back to it. Cybersecurity is a shared responsibility, and we are committed to being an active participant in the security of the open-source ecosystem.

How does GitHub ensure its AI-powered security features remain adaptive and capable of addressing emerging cybersecurity challenges?

We continue to invest across our platform and tooling to help open-source developers and maintainers proactively secure their code. With our currently released features, our priority is delivering the best results (i.e. high-confidence alerts that will improve over time) and incorporating feedback from customers. Additionally, through work like CodeQL auto-modeling powered by LLMs, we’re rapidly innovating and increasing the coverage and capabilities of our security tools. We’re also committed to transparency and adherence to best practices to ensure the reliability of our tools. Third-party testing and audits are crucial measures to make sure we’re constantly assessing our technology’s efficacy and security. After all, selecting tools that have undergone rigorous testing and audits is an essential part of strengthening the security posture of any organisation.
