How companies sneakily use AI on you, and how you can fight back
I’m a software engineer working in AI, and even so, I was quite stunned by what just happened to me.
Here’s the story.
I’m looking for a summer rental in Boston, so I used an app to search for furnished rentals and indicate interest in some properties. Got back a couple of emails, obviously templates. Nothing unusual here: you’re supposed to reply, and then someone at the agency gets in touch.
I replied to the message, detailing what I was looking for, since I wasn’t sure the app had passed that info along:
Note the time, 11:46pm on a Sunday night. To my pleasant surprise, Ivy replied within three minutes:
Now if you’ve used ChatGPT at all, that kind of phrasing,
We offer 2-bedroom apartments that align with your preferences for modernity, quietness, and proximity to fitness and green spaces.
should immediately stand out. Note that the reply said nothing about furnished apartments. So I took a closer look at the email from “Ivy”:
Great, I’ve been exchanging emails with a chatbot. To their credit, they did disclose that in the fine print at the bottom of the email (but who reads that? Some email clients actually hide it).
Oh well, let’s make the best of this. Since we’re talking to some sort of chatbot, let’s use prompt engineering to ask precisely about furnished rentals.
This is the fun part. Grab your popcorn. 🍿🍿🍿
There you go: when companies use AI on you, you can use their AI right back on them! With a bit of prompt engineering (basically just very clear communication), you can get the information you want, without a human actually trying to deceive you.
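If you want to try this yourself, the whole trick is to write to the bot the way you’d brief a very literal assistant: state your constraint explicitly, ask one narrow question, and ask for a structured answer. Something along these lines (the specifics here are placeholders, not my actual message):

“Please answer directly and skip the marketing language. Do you have any furnished 2-bedroom apartments available from June through August? If yes, list each one with its address, monthly rent, and earliest move-in date. If none are furnished, or you don’t know, please say so explicitly.”

A chatbot can’t read between the lines the way a leasing agent would, so spelling out the constraint (“furnished”), the time frame, and the exact format you want back is what gets you a useful answer instead of another wall of pleasant filler.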