Why does AI tell you to use Terminal so much?

eclecticlight.co · 2 days ago
There’s a striking difference between troubleshooting recommendations made by AI and those made by humans. If you’ve tried using AI to help solve a problem with your Mac, you’ll have seen how heavily it relies on commands typed into Terminal. Look through advice given by humans, though, and you’ll see they rely more on apps with GUI interfaces. Rather than sending you straight to fsck_apfs, for instance, most humans will prefer to direct you to Disk Utility and its First Aid feature.

This is because the most popular AI assistants, such as ChatGPT, Claude and Grok, are based on LLMs, Large Language Models, built on tokens for words. The great majority of humans using Macs work primarily in the GUI, using apps rather than Terminal commands. Verbalising the use of GUI apps is relatively difficult, and with the popularity of YouTube rather than written blogs, it is becoming a dying art.

Command tools are all very well, but in this context they have disadvantages, among them being:
- Few understand the commands used, and AI seldom explains those provided unless explicitly asked to do so. This results in the human learning little other than a long list of incomprehensible commands, and acquiring neither understanding nor insight.
- Commands often lack the safeguards normally built into apps, so can more readily cause damage or mislead.
- Commands can readily generate overwhelmingly large output, even when piped to a file for later browsing.
- Pasting commands into Terminal has become a favourite way to trick the unsuspecting into installing malware.

Even for an advanced user, confident assertions made by an AI can mislead. Here’s an example taken from ChatGPT’s advice on potential detections of malicious software. For this I have set ChatGPT’s output in italics, together with its commands as code.
It’s worth noting that each command was explicitly given for the bash command shell, although in macOS Catalina (in 2019) Terminal changed to use zsh rather than bash as its default. ChatGPT starts this exchange with a teaser that few could resist: If you want, I can also show you how to tell in 10 seconds whether a Mac actually has malware despite these logs (there are 3 very specific indicators in Console).

Vague predicate

Here’s the most reliable way to see if macOS has ever detected malware on your Mac using Apple’s internal security logs.

1. Check the XProtect malware history
Open Terminal and run:
log show --predicate 'subsystem == "com.apple.XProtectFramework"' --last 30d

This log show command doesn’t access “Apple’s internal security logs”, which don’t exist, but the Unified log. Nor does the recommendation use Console to access the log, as promised. The predicate given is too non-specific to be useful for the stated purpose, and the time period of 30 days is absurdly long, and doesn’t match the claimed period either. On my Mac with the longest log records, the oldest Special tracev3 log file was created on 9 Feb 2026, and now has only 43 KB left from its original size of 11,298 KB; the next oldest, from 10 Feb, has 34 KB left from 11,148 KB. On an older Intel Mac, the oldest creation date of a Special log file was 6 March, just 4 days previously, not 30. When run on 10 March on the first of those Macs, with the longer log records, that log show command returned two Fault entries from 11 Feb, and more complete records from Special logs starting on 2 March, but no full log records until 6 March. The predicate used is too general to select useful entries, and is overwhelming: it returned a total of 1,505 log entries.

Log entries a year old

2. Specifically check for remediation events
To filter just real detections:
log show --predicate 'eventMessage CONTAINS "remediat"' --last 365d
If nothing prints, no malware has been removed in the last year.
ChatGPT here defines “real detections” as those for which remediation was attempted. I know of no evidence from research to support that assumption. Given the limitations in retrieving log entries from the last 30 days noted above, it seems absurd to assume that any log entries will be retained from over 360 days ago.

It’s malware only if blocked

3. Check Gatekeeper malware blocks
Run:
log show --predicate 'process == "syspolicyd" AND eventMessage CONTAINS "malware"' --last 365d

ChatGPT has here redefined its original claim to limit it to malware that was blocked by Gatekeeper, and once again assumes it can retrieve log entries from over 360 days ago.

No verification

4. Verify XProtect definitions are current
Run:
system_profiler SPInstallHistoryDataType | grep -i xprotect
You should see recent updates like:
XProtectPlistConfigData
XProtectPayloads
Apple updates these silently in the background.

This is one of its most curious recommendations, as system_profiler is the command line interface to System Information, a familiar and far more accessible app. What that command does is look for the case-insensitive string “xprotect” in the Installations list. Unfortunately, it proves useless, as all you’ll see is a long list containing those lines, without any dates of installation or version numbers. On my older Mac, piping the output to a file writes those two words on 6,528 lines, without any other information about those updates.

I know of two ways to determine whether XProtect and XProtect Remediator data are current, one being SilentKnight and the other Skint, both freely available from this site. You could also perhaps construct your own script to check the catalogue on Apple’s software update server against the versions installed on your Mac, and there may well be others. But ChatGPT’s command simply doesn’t do what it claims.
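For comparison, the first and fourth checks can at least be attempted in a narrower and more honest form. The sketch below is illustrative only: the subsystem predicate and the XProtect bundle path are my assumptions about current macOS, not verified recommendations, and the one-day window reflects how quickly the Unified log is pruned.

```shell
#!/bin/sh
# Sketch only: narrower versions of the checks criticised above.
# The predicate and the bundle path are assumptions, not guarantees.

if command -v log >/dev/null 2>&1; then
    # Query the Unified log over a period it actually retains
    # (a day, not 30 or 365), restricted to XProtect's subsystem.
    log show --predicate 'subsystem == "com.apple.XProtectFramework"' \
        --last 1d --style compact | tail -n 20
else
    echo "macOS 'log' tool not found; run this on a Mac"
fi

# Read the installed XProtect data version directly from its bundle,
# instead of grepping the whole Install History.
PLIST="/Library/Apple/System/Library/CoreServices/XProtect.bundle/Contents/Info.plist"
if [ -f "$PLIST" ]; then
    XPROTECT_VERSION="$(defaults read "${PLIST%.plist}" CFBundleShortVersionString)"
else
    XPROTECT_VERSION="unknown (bundle not found)"
fi
echo "XProtect data version: $XPROTECT_VERSION"
```

Even then, this only reports what version is installed; to know whether it is current you still have to compare it against Apple’s software update catalogue, which is what SilentKnight and Skint do for you.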
How not to verify system security

Finally, ChatGPT makes another tempting offer: If you want, I can also show you one macOS command that lists every XProtect Remediator module currently installed (there are about 20–30 of them and most people don’t realize they exist). It’s a good way to verify the system security stack is intact.

This is yet another unnecessary command. To see the scanning modules in XProtect Remediator, all you need do is look inside its bundle at /Library/Apple/System/Library/CoreServices/XProtect.app. The MacOS folder there should currently contain exactly 25 scanning modules, plus the XProtect executable itself. How listing those can possibly verify anything about the “system security stack” and whether it’s “intact” escapes me.

Conclusions

Of the five recommended procedures, all were Terminal commands, despite two of them being readily performed in the GUI. AI has an unhealthy preference for command tools, even when an action is more accessible in the GUI.

None of the five recommended procedures accomplished what was claimed, and the fourth, to “verify XProtect definitions are current”, was comically incorrect.

Using AI to troubleshoot Mac problems is neither instructive nor does it build understanding. AI is training the unsuspecting to blindly copy and paste Terminal commands, which puts them at risk of being exploited by malicious software.

Previously: Claude diagnoses the log

15 Comments

1. Dear Howard.
I help people with Apple computers, and I recently received a phone call from a very distressed lady who had used ChatGPT to help with an issue on her MacBook. ChatGPT issued a set of instructions, shown below:
dot_clean -m ~/Desktop
xattr -cr ~/Desktop
sudo mdutil -E /
defaults delete com.apple.finder
diskutil resetUserPermissions
I am unsure what the commands did, but every file on her desktop was rendered un-openable. The lady went back to ChatGPT, which gave more instructions that failed. I sent her files to 2 data recovery companies, which could not open them; she then found a data recovery expert, who also failed. She ran a small chocolate company, and everything concerning the company was saved on the desktop. Financial spreadsheets, recipes, everything totally destroyed. I googled the instructions given but got lost. I don’t know what finally happened, but I imagine it was terminal. I have decided to give AI a very wide berth.

2. Horrific! Did she not have backups, though? Howard.

3. I wonder if ChatGPT based its reply on commands derived from Simon Sheppard’s ss64.com site? I’d say Simon developed the site in good faith, but I think he needs to stress the potential harm in using some of the more powerful commands, and combinations of commands, by inexperienced users.

4. Unfortunately not. M

5. I don’t see anything wrong with that site, though – it’s just a compilation of useful man pages with some added info. I suspect those originated in the many other unguarded postings elsewhere. There’s no shortage! Howard.

6. I’ve noticed ChatGPT has become more aggressive in proposing follow-ups, even when they have low value or go off topic.
It seems to be more of a strategy to keep people “algorithmically engaged” rather than solving the initial request.

7. Indeed. It is quite manipulative in this session. Howard.

8. Could it be because the GUI source material contains images, and those images needed to be preprocessed to turn them into text that could be incorporated into the model? Could that mean there are fewer examples in the model, or that GUI input is less “common”? Would this difference be aggravated for models that are generating their own training data?

9. Good accounts can be almost as well-illustrated as a comic. I have yet to see any comic created by AI. How do you turn a screenshot of an app in action into words? In the relatively few cases where I have seen AI attempt to steer folk through using the GUI, it’s clear that it’s hopelessly lost, and often gives erroneous instructions, e.g. for menu commands that don’t exist. It also doesn’t know the names of tools or controls. Howard.

10. I think you and I are in agreement. While there are some models that will generate a description of an image (and models that will generate an image from a description), I assume that these do not work well for GUI tutorials. After all, LLMs are, by and large, next-word generators. They do not understand anything, and if most of the tutorials for GUI use are images/videos, then that will not be in the training data used to build the model.

11. Indeed. Howard.

12. Forgive me, Howard, if you’ve mentioned this already, but it doesn’t hurt to post it again: the Terminal has lately become the attack vector of choice for malware distributors, including state actors, e.g. North Korea.
These attacks do an end run around all the protections built into macOS, which you, Howard, so painstakingly explain and address here. All of the attacks I’ve seen use the curl command to download malware to the gullible and manipulated victim’s Mac. Most of them originate in fake “Mac help” sites, positioned to appear first in search results. IF YOU DON’T KNOW WHAT YOU’RE DOING IN TERMINAL, PASTING IN COMMANDS YOU KNOW NOTHING ABOUT IS NOT THE PLACE TO LEARN. YOU ARE ALMOST CERTAINLY GOING TO GET BURNED. Mac security researcher Patrick Wardle discusses these “ClickFix” attacks here: https://objective-see.org/blog/blog_0x85.html Patrick has built a mechanism into his BlockBlock security application to trap commands copied and pasted into Terminal, but all it can do is issue a warning. As far as this article goes, this is AI at its worst: regurgitated, plausible (to the novice) nonsense, at best harmless. At worst, your Mac is going to become terminal. -Mark

13. Thank you, Mark, yes it’s always good to repeat. Howard.

14. A user had a crash when starting one of my apps which I couldn’t reproduce, though he did send the crash report (always send a crash report, please!). I turned to Google AI, which told me about the useful atos command to find the file and line number of the crash. Eventually it claimed I had to set NSRestorableStateKey in UserDefaults. There is no NSRestorableStateKey, nor was there ever. Nice. Buyer beware, I guess. It turned out to be a race condition, because Apple changed the way AppleScriptObjC was initialized: it had to be initialized on the main thread, otherwise XProtect would hang. Assuming it didn’t crash for me because XProtect had already cleared the app previously? Lack of Apple documentation strikes again.

15. Thank you. Howard.