Authors: Jan H. Klemmer, Stefan Albert Horstmann, Nikhil Patnaik, Cordelia Ludden, Cordell Burton Jr, Carson Powers, Fabio Massacci, Akond Rahman, Daniel Votipka, Heather Richter Lipford, Awais Rashid, Alena Naiakshina, Sascha Fahl
Published on: May 10, 2024
Impact Score: 7.8
arXiv: 2405.06371
Summary
- What is new: This research investigates the trade-off between AI assistant usage and security in software development, with a new focus on the security implications of AI-generated code and how professionals analyze it.
- Why this is important: The main problem addressed is that little is known about how software professionals use AI assistants in secure software development and what security concerns they face.
- What the research proposes: The solution proposed involves conducting interviews and analyzing social media discourse to understand software professionals’ perspectives on AI assistant usage for secure software development.
- Results: The findings show that, despite security and quality concerns, AI assistants are widely used even for security-critical tasks; a general mistrust leads developers to validate AI suggestions much as they would review code from a human colleague (see the sketch below).
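The "validate AI output like a colleague's code" pattern the participants describe can be operationalized as an automated gate. Below is a minimal sketch of one such gate, our illustration rather than anything the paper prescribes: it runs the Bandit static analyzer (an assumed tool choice) over a directory of AI-generated code and fails when findings are reported.

```python
import subprocess
import sys

def scan_ai_generated_code(path: str) -> bool:
    """Run Bandit over AI-generated code before accepting it,
    mirroring the review gate developers apply to human-written code."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "txt"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # Bandit exits non-zero when it reports security findings.
    return result.returncode == 0

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(0 if scan_ai_generated_code(target) else 1)
```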
Technical Details
Technological frameworks used: The paper relies on qualitative research methods, combining interviews with qualitative coding of Reddit discourse.
Models used: None; this is a qualitative study and does not introduce or evaluate machine learning models.
Data used: 27 interviews and 190 Reddit posts and comments.
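As an aside on the Reddit portion of the dataset, a collection step might look like the sketch below. This is a minimal illustration assuming the Python PRAW library; the subreddit, search query, and credentials are placeholders, since the paper does not disclose its collection tooling.

```python
import praw  # Reddit API wrapper; pip install praw

# Placeholder credentials; register an app at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="ai-assistant-security-study/0.1",
)

posts = []
# Hypothetical subreddit and query; not the paper's actual search terms.
for submission in reddit.subreddit("programming").search(
    "AI assistant security", limit=50
):
    submission.comments.replace_more(limit=0)  # flatten "load more" stubs
    posts.append(
        {
            "title": submission.title,
            "body": submission.selftext,
            "comments": [c.body for c in submission.comments.list()],
        }
    )

print(f"Collected {len(posts)} posts for qualitative coding.")
```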
Potential Impact
This research impacts the software industry at large, including companies developing AI assistants, such as OpenAI and GitHub, and organizations that use these tools for secure software development.
Want to implement this idea in a business?
We have generated a startup concept here: SecurAI Assist.