Security Risk of AI code editors
r/cybersecurity — this post was submitted on 09 Jan 2025 · 0 points (40% upvoted)
News - General (self.cybersecurity) · submitted 1 year ago by pssual

I am looking at AI code editors like cursor.ai or bolt.new. How do these leverage LLMs without any security risk? Do companies allow AI code editors on their systems? Is code sharing not a problem? If anyone here has gone through such an integration, what security risks or challenges should be accounted for?

14 comments · sorted by: best

[–] PlanesFlySideways (5 points · 1 year ago)

As a software developer, I have zero desire to log in and use any built-in IDE AI, since most of them send all that data to an outside source. It frankly scares me that it's being put right in our faces in IDEs as well as at the OS level. The potential for data leaks is skyrocketing. Copilot Copilot Copilot ...
Fuck off, Copilot.

[–] pssual [S] (1 point · 1 year ago)

I used a VS Code plugin to try it out. At first glance it's very convenient and easy to use. 😀

[–] PlanesFlySideways (2 points · 1 year ago)

I'll probably try it out for personal projects eventually, but there's no way I'd do it for work without express permission and guidelines. I generally use ChatGPT to ask vague questions about how to do something, then use those ideas to guide me or to find APIs I didn't know existed.

[–] Diligent_Ad_9060 (4 points · 1 year ago)

I've been to seminars where people are happy to put internal data and IP into LLM prompts, and I've seen consultants using OpenRouter during their assignments. I guess most people don't care, or are so caught up in the idea of generative AI being a paradigm shift in human history that there are no braincells left for risk assessments.

[–] pssual [S] (1 point · 1 year ago)

Hardly anyone cares about risk assessments.

[–] [deleted] (2 points · 1 year ago)

There is really only one answer. Either 1) the app downloads the LLM and runs all processing locally without sending anything anywhere, or 2) the app sends all of your code to mystery servers somewhere, does the processing there, and you have to fully trust the provider to both a) not be malicious (e.g. they could forward your code to ChatGPT and keep themselves another copy to review at their leisure) and b) protect your data to a level you are comfortable with. Most reputable apps will probably still have horrible security and pose a big risk of leaking your code to parties you don't want having it.
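The two deployment models in the comment above can be checked mechanically: before a plugin ships any source code to a completion endpoint, verify that the endpoint actually resolves to the local machine. A minimal sketch in Python (the endpoint URLs and the `guarded_complete` helper are illustrative assumptions, not any particular editor's real API):

```python
from urllib.parse import urlparse

# Hosts that keep processing on this machine (option 1 above).
# Anything else is treated as a remote provider (option 2 above).
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local_endpoint(url: str) -> bool:
    """Return True if the completion endpoint stays on the local machine."""
    return urlparse(url).hostname in LOCAL_HOSTS

def guarded_complete(url: str, prompt: str) -> str:
    """Refuse to send source code to any non-local host.

    Hypothetical helper: a real tool would POST `prompt` to the
    local model server here instead of returning a placeholder.
    """
    if not is_local_endpoint(url):
        raise PermissionError(f"refusing to send code to remote host: {url}")
    return "(sent to local model)"

# A locally hosted model server passes the check:
print(is_local_endpoint("http://127.0.0.1:11434/api/generate"))      # True
# A hosted provider is blocked before any code leaves the machine:
print(is_local_endpoint("https://api.example.com/v1/completions"))   # False
```

A guard like this only addresses where the code goes; as the comment notes, with option 2 you still have to trust the provider's handling and retention on the other end.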
[–] pssual [S] (1 point · 1 year ago)

1) Running an LLM locally incurs a huge cost, which defeats the point of LLMs being cost-efficient. 2) You certainly can't trust the process companies describe in their T&Cs, privacy, or security policies. Also, any GRC folks reading this: please have a look through the docs mentioned above and let us know what you think.

[–] Kind_Brick_8461 (2 points · 1 year ago)

Security engineer here. AI code editors are tricky: most send your code to external APIs for processing. That's a potential data leak waiting to happen. At our company, we only allow self-hosted solutions or approved AI tools with proper DLP controls.

The main risks:

- Code/IP leakage to third parties
- Sensitive data in comments getting processed
- AI models potentially storing your code
- Compliance issues (GDPR, HIPAA)

Best approach? Create clear policies on AI tool usage and implement DLP rules to catch unauthorized editors. Some companies even use air-gapped environments for sensitive code.

[–] AliveRule3532 (1 point · 10 months ago)

How about Cursor? Is it safe?

[–] NoUselessTech (Consultant) (2 points · 1 year ago)

First, the idea that you can do anything without risk is humorous to me. I generally believe that if what you do has no potential for side effects, it's probably not doing anything.

Second, companies 100% use AI tools for development today. Whether it's integrated Copilot, Devin, or something else, it's in play. There are major quality-of-life improvements to be had from AI, especially for boilerplate applications or quick-and-dirty proofs of concept. Whether the security risk outweighs the business value of AI really comes down to the perspective of the business.
If I'm writing software with national-security impact, I might think twice. If I'm building an eCommerce platform, it might not be that big a deal. I'd wager that 90% of development is just recycling ideas that have already been implemented, so we aren't "really" training the AI on anything new. For the 10% of development that truly is bespoke, whether because it's a new algorithm, a proprietary software solution, drivers, etc., I'd be more cautious about what I share with the AI/LLM. The reasons are twofold: one is information sharing, but also, the more niche the request, the less likely the AI/LLM will be helpful or accurate. It's good at regurgitating what it has seen, but coming up with something actually novel (and useful) is still a growth area for models.

As an aside, go learn more about how to actually think about risk. The FAIR Institute has several free resources and is a good model to start with.

[–] pssual [S] (1 point · 1 year ago)

Here's what I have observed so far:

- Small and medium companies with little or no funding are turning to LLM-enabled tools for everything from sales and marketing to bug and vulnerability handling.
- Companies are ready to let go of their IP because they want to stay relevant; in some cases LLM apps also seem to reduce ops costs.
- The majority of these LLM tools are hardly fine-tuned to the use cases the companies are selling.
- Finally, I think it's the investors who are pushing hard for AI adoption.

And what's with the subscriptions? I find them a bit like 🎣

[–] [deleted] (1 point · 1 year ago)

Can't Copilot be configured for "Commercial Data Protection"?

[–] [deleted] (1 year ago)

[removed]

[–] cybersecurity-ModTeam [M] (2 points · 1 year ago · locked)

Hi, please be mindful of rule #6 (no excessive promotion), as it looks like you are promoting the same entity too often.
We ask that all community members remain minimally biased and keep any promotion (self-promotion, promotion of a particular company's blog, etc.) under 10% of your posts and comments on the subreddit, and under once per week. We explain the reasoning and requirements in depth here: https://www.reddit.com/r/cybersecurity/wiki/rules/promotion/ Thank you for reading, and please reach out to modmail if you have any questions.