In late March, the world gasped after photos emerged showing police arresting former President Donald Trump.
In one of the photos, officers grip Trump’s arms and lead him away. Others show Trump running from police, and still more images depict Trump’s family members looking on in distress. It was a scene unlike anything that had happened previously in American history.
And as it turns out, it still hasn’t happened.
That’s because the photos weren’t photos at all. Instead, they were the handiwork of an artificial intelligence image generator called Midjourney. The images first appeared on Twitter before going viral on other platforms such as TikTok. Initially, many users assumed the shots — which at a glance looked believable — were real and subsequently shared them as such. (Trump’s actual arrest a couple of weeks later was a far less dramatic and far less photographed affair.)
But before long, eagle-eyed viewers noticed errors in the images, and dozens of news outlets wrote posts debunking them.
The episode nevertheless captured the awesome, terrifying and sudden power of artificial intelligence image generators. Seemingly overnight, ordinary people gained the ability to conjure realistic-but-fake imagery that in the past might have required extensive training or a Hollywood budget. And while the Trump photos were widely seen as a cautionary tale about misinformation, others have used AI tools to create fine art, illustrate children’s books, and assemble professional branding materials.
All of which is to say that AI image generators are now achieving unprecedented levels of sophistication, and are thus poised to disrupt industries such as real estate. That reality raises a host of challenges, but it also means real estate professionals are about to have new opportunities to streamline their work, extend their reach and integrate technology in ways that even months ago were unimaginable.
In that light, here’s what you need to know about AI imagery.
Table of Contents
- Why is there suddenly so much buzz about AI-generated images?
- The big three image generators
- How can agents use AI-generated images in real estate?
- Tackling the learning curve
- Big risks and big rewards
Why is there suddenly so much buzz about AI-generated images?
As Inman previously reported, the current frenzy over AI began in December when technology company OpenAI debuted the free, public version of its chatbot, ChatGPT.
OpenAI has similarly driven the explosion of interest in AI image generators. The company first announced its DALL-E tool — named for artist Salvador Dalí and Pixar robot WALL-E — back in 2021. It debuted an upgraded version called DALL-E 2 last spring, then made that tool widely available to everyone in September.
A handful of competing image generators have popped up along the way, but DALL-E’s debut and growth alongside ChatGPT have been the biggest drivers of attention to the sector.
The big three image generators
Much as is the case with chatbots, there are numerous AI image generators out there right now. But in recent months, three have emerged as the dominant players. Inman tested those three by asking them to create an image based on the following prompt:
a house on a hill surrounded by flowers, with the milky way galaxy visible in the background
The goal of the experiment was to test the AI, but in this case Inman was also trying to produce header images for a recent series of stories on the spring market. So it wasn’t an entirely academic exercise.
Here’s how the experiment went:
Stable Diffusion
Stable Diffusion is by far the most user-friendly of the big-name image generators. Users don’t need to sign in or sign up. You simply go to the website, enter a prompt and wait seconds or minutes (depending on how busy the site is). Better still, it’s free and there are no limits on the number of prompts users can enter. As a result, Stable Diffusion is a good entry point for anyone wanting to test the AI waters.
Here are the images Stable Diffusion produced from that prompt:
The pictures above are not terrible per se, but if you look closely, they have an uncanny valley quality to them. And they highlight one of Stable Diffusion’s big tradeoffs: The platform is free and easy to use, but its images often aren’t as impressive as those of some rival platforms.
Luckily, Stable Diffusion has a section of its website devoted to effective prompts, so it’s easy to get used to the platform and figure out how to generate better images.
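Because the Stable Diffusion model itself is open source, technically inclined users aren’t limited to the website at all. Below is a minimal sketch of running it locally with Hugging Face’s diffusers library; the checkpoint, settings and file name are illustrative assumptions, and a reasonably capable GPU is assumed.

```python
# A minimal sketch of running Stable Diffusion locally with Hugging Face's
# diffusers library (assumes `pip install diffusers transformers torch`
# and a CUDA-capable GPU; the checkpoint and settings are illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("a house on a hill surrounded by flowers, "
          "with the milky way galaxy visible in the background")

# guidance_scale controls how closely the output follows the prompt;
# num_inference_steps trades image quality for speed.
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=30).images[0]
image.save("house_on_hill.png")
```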
DALL-E 2
DALL-E 2 is still probably the best-known AI image generator, and Inman has used images from the platform several times as headers for stories. Here’s how it responded to the prompt:
The tool is a powerful one. That said, the results above aren’t terribly inspiring. They’re not quite as uncanny as Stable Diffusion’s images, but they’re also drab and dark. That’s not to say all of DALL-E 2’s images are drab — this Inman story features a bright drawing-style image of rockets that the platform created — but the images won’t turn out spectacular unless a user figures out how to write effective prompts.
Users have to sign up to use DALL-E 2. The site operates using “credits,” with each new prompt costing one credit. Most users receive 15 free credits each month, which isn’t enough to do much experimenting, though depending on when a person signs up, they may also receive an initial bucket of free credits. The site currently charges $15 for 115 additional credits.
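For agents (or their developers) who would rather script image generation than click through the web interface, DALL-E can also be reached through OpenAI’s Images API, which is billed per image rather than through the site’s credits. Here is a minimal sketch, assuming the openai Python package as it existed at the time of writing and a valid API key:

```python
# A minimal sketch of generating images through OpenAI's Images API
# (assumes `pip install openai` and an API key; usage is billed per image,
# separately from the DALL-E web interface's credits).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - use your own key

response = openai.Image.create(
    prompt=("a house on a hill surrounded by flowers, "
            "with the milky way galaxy visible in the background"),
    n=2,               # number of variations to generate
    size="1024x1024",  # supported sizes include 256x256, 512x512 and 1024x1024
)

for item in response["data"]:
    print(item["url"])  # each URL points to one generated image
```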
Midjourney
Midjourney is probably the buzziest AI image platform, and for good reason: It consistently generates incredible shots. The images it came up with for the prompt were far and away better than what DALL-E 2 or Stable Diffusion created:
After some tweaking via follow-up prompts, Inman ultimately used Midjourney’s images for the headers on the spring market series.
Midjourney images are also good enough that they consistently go viral; in addition to the Trump episode, there were also the so-called “Balenciaga Pope” images that made the rounds on social media last month.
But Midjourney’s downside is that the learning curve to get started is very steep. The platform is not actually a website on its own, but rather works via Discord, a social and messaging platform. So, would-be users first have to sign up for Discord, choose a “server” in which to participate, learn the basic command syntax to get the bot working (prompts must begin with “/imagine”), and then figure out where their images are going to show up.
Once you get used to the system, it’s fairly easy. But Inman’s first several attempts to use Midjourney involved a lot of Googling of instructions.
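For reference, a finished Midjourney request typed into a Discord channel ends up looking something like this (shown here with the same prompt from Inman’s experiment):

```
/imagine prompt: a house on a hill surrounded by flowers, with the milky way galaxy visible in the background
```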
Midjourney has historically offered a free trial version of its platform, but in recent days has cut off the free version, citing excess demand. So an additional part of the learning curve now involves signing up and paying for a subscription as well. As of this writing, Midjourney’s cheapest tier starts at $8 per month. The most expensive tier costs $48 per month.
How can agents use AI-generated images in real estate?
While chatbots have obvious applications in real estate, AI images are at once more thrilling and harder to deploy. Agents, after all, don’t necessarily need original imagery of entirely artificial environments. That said, here are a few of the ways industry pros are using this technology:
Headshots
Headshot generation — for which there are a number of providers — is one of the more popular ways agents are using AI.
Lauren King — an agent with PureWest Christie’s International Real Estate in Whitefish, Montana — is among those who have given the concept a try, telling Inman she used the company Try it On. The service asks users for 10 to 20 original photos, then after a day or two sends back about 100 AI-generated headshots. The service cost King $17.
These are two of the original images King sent to Try it On:
Of the more than 100 images Try it On created, King said there were a handful that she liked and might actually use.
“I would say I would feel comfortable using five or eight,” she said. “They’re the ones I think no one would question.”
Here are two of those images:
The images King shared with Inman show that Try it On added warm, golden-hour lighting and a shallow depth of field (meaning the background is out of focus), among other things. They look professional.
But the results also varied, with King saying that many of the photos either looked nothing like her, had strange lighting, or were otherwise unusable.
Here are a couple of the less successful examples, which failed to accurately capture a number of King’s facial features. The second image in particular also has an oddly unreal quality.
Despite some of the weirder renderings, King spoke positively of her experience.
“It was a fun experiment,” she said, adding that it was an easier process than shooting actual headshots. “I find photo shoots and headshots to be kind of stressful.”
Deena Serna agreed that shooting headshots can be “a pain” and told Inman that she, too, used Try it On for her headshots. Serna, a Compass agent in Vero Beach, Florida, said that of the 100 or so images she received, many “came out just messed up,” with strange-looking eyes and teeth. But like King, she received enough usable images to make the experiment worth it.
“There was a handful that turned out pretty good,” she said. “I’ll definitely use them. The resemblance to my real natural self is passable.”
Such sentiments are circulating rapidly through real estate social media groups. And while the weirder results highlight AI’s current shortcomings, this is also one of the primary frontiers in which artificial intelligence-generated real estate content is actually making it out into the real world.
Renderings and visualizations
Another buzzy way that people are deploying AI visualizations in real estate is via renderings of buildings. The images below, for example, are AI-generated shots of townhouses.
The Twitter user who generated these images describes them as having “immaculate design,” and it’s hard to argue with that assessment. The user also suggests that it’s only a matter of time before such images can be input into the software that architects use to design actual buildings.
From there, it’s also not a huge stretch to imagine a full AI-based building pipeline that begins with pretty pictures and ends with 3D-printed buildings.
That’s a speculative outcome, but already real estate professionals are imagining ways to deploy this kind of visualization technology to smooth out the home search process. For instance, Dave Jones, co-owner of Windermere Abode in Tacoma, Washington, told Inman he could imagine using AI to help clients better visualize and communicate what kind of property they want.
“What if I was helping someone who wasn’t there and they could explain to me what the house they wanted looked like,” he said. “So now I have a visual of what you’re looking for.”
Kent Czechowski, chief data scientist at OJO, told Inman that these tools might also improve existing real estate visualization and virtual staging tools.
“For example, AI tools can let a homebuyer imagine what a currently empty living room would look like if their furniture and home decor were present, or help a prospective seller see what an enhancement to their property would look like or even cost,” Czechowski said.
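Under the hood, that kind of virtual staging typically relies on image inpainting, where a model redraws only a masked portion of a photo while leaving the rest untouched. Here is a minimal sketch using the open-source Stable Diffusion inpainting pipeline from Hugging Face’s diffusers library; the file names, prompt and checkpoint are illustrative assumptions rather than a description of any vendor’s actual tool.

```python
# A minimal sketch of prompt-driven virtual staging via image inpainting with
# diffusers (file names and prompt are hypothetical; assumes a CUDA GPU).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The model works at 512x512, so resize both the listing photo and the mask.
room = Image.open("empty_living_room.jpg").convert("RGB").resize((512, 512))
# In the mask, white marks the region to redraw (where furniture should appear).
mask = Image.open("floor_area_mask.png").convert("RGB").resize((512, 512))

staged = pipe(
    prompt="mid-century modern sofa and coffee table, warm natural light",
    image=room,
    mask_image=mask,
).images[0]
staged.save("staged_living_room.png")
```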
A number of real estate companies are currently experimenting with enhanced AI technology. Inman will dive into such tools in a later post.
Marketing content
Real estate professionals can also use AI in their marketing, according to Nick Niehaus, co-founder at real estate training firm Business Video School.
“It should be used in marketing, and I think it should be used even today,” Niehaus said. “Marketing is all about saying something different than your competition and that’s something the tools allow us to do more efficiently.”
Niehaus has spent recent months experimenting with different AI platforms, and said that tools such as Midjourney and DALL-E offer both speed and originality advantages. They can also help agents create a cohesive visual marketing campaign that translates across mediums.
“You can make a lot more images a lot faster,” he added. “You can have a postcard campaign that drives them to a landing page and leads to a series of emails. And you can have all of that really in minutes.”
Czechowski further noted that AI may let real estate professionals better target their marketing to their clients.
“The proliferation of content will lead to an explosion of hyper-localized and highly-targeted content,” he said. “Consumers will expect an experience tailored to their personal preferences, communication styles, and timelines.”
While this type of content is a cutting-edge AI business application, it’s clearly also becoming more popular: Because Midjourney operates on Discord, users can see each other’s prompts and results, and mixed in with the endless fantasy landscapes are a large number of company logos and other branding images.
Tackling the learning curve
While the potential of AI image generators is significant, they are not necessarily easy to use. Or at least, they don’t automatically produce great images.
Niehaus, for example, said that in order to generate the kind of comprehensive AI-based marketing campaign mentioned above, a user might need to input anywhere from 20 to 50 different prompts. And those prompts need to use language that the AI platform understands.
“A lot of folks they’ll put in one, or maybe a couple prompts, and be like ‘I’m not getting what I want,'” he said. “The way you talk to Google, to search for something, I think that’s a good analogy for what we’re learning here.”
Among other things, Niehaus said that image generators generally want users to enter as few words as possible. Additionally, the order of words matters, with those at the beginning of the prompt exerting greater influence on the end result.
“You don’t want to speak in complete sentences,” he added, drawing a contrast between image generators and chatbots, the latter of which tend to excel at human-like speech.
Niehaus also advised users to think critically about the product they want to produce. He suggested, for example, that people with a photography background tell the image generators what kinds of lenses and lighting temperatures they’re looking for in images. People who have a particular artist’s style in mind should include that artist’s name in the prompt.
“You might say, ‘brownstone house, female real estate agent in front, cloudy day, 5600k lighting,’ and include a certain kind of lens,” Niehaus explained, adding that learning to write effective prompts takes time.
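To make that structure concrete, here is a hypothetical snippet showing how such a prompt might be assembled: short comma-separated phrases, with the most important terms placed first. The helper function and the specific lens choice are illustrative, not Niehaus’ actual workflow.

```python
# A hypothetical helper illustrating the prompt structure Niehaus describes:
# short comma-separated phrases, most important terms first (the function
# and the "35mm lens" detail are illustrative examples, not his exact advice).
def build_prompt(subject, setting, modifiers):
    # Order matters: subject first, then setting, then style/technical details.
    return ", ".join([subject, setting] + list(modifiers))

prompt = build_prompt(
    "brownstone house",
    "female real estate agent in front",
    ["cloudy day", "5600K lighting", "35mm lens"],
)
print(prompt)
# brownstone house, female real estate agent in front, cloudy day, 5600K lighting, 35mm lens
```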
Niehaus’ suggestions, however, point to a subtle but important distinction between the way AI image generation works and the more conventional process of searching for photography. In the past, someone looking for an image to put on a website or a piece of marketing might have visited a photo gallery, viewed a variety of disparate shots, and then decided what to use from among those options. In other words, a user might start with only a vague idea of what image they want, and then come to a decision during the search process.
However, generating an image via AI is entirely different. Instead of using vague search terms, users have to have a specific idea already in mind of what they’re trying to generate. So, the moment at which a user decides on an image happens much sooner in the process, before the search. It involves, in other words, a creative process more akin to painting than to scrolling through a gallery on Getty Images.
Big risks and big rewards
As is the case with chatbots, the rise of AI image generators has prompted a number of questions about the future of work and intellectual property.
For example, one contentious issue surrounding AI image generators revolves around how much their products pull from existing work. In some cases, AI images have even included what look like the signatures of real-life artists, prompting concerns that the bots are plagiarizing — and eating into the business of — real people. Such concerns have since prompted multiple lawsuits, and it remains to be seen how intellectual property laws might ultimately apply to the gamut of AI art.
These issues could impact how much real estate professionals lean on AI imagery in the long term. In the shorter term, AI’s reliance on existing artwork may also make it harder and harder to generate truly original content — or at least for beginners to do so without serious prompt-entering chops.
Beyond intellectual property, there are also risks when it comes to misinformation. The Trump arrest images highlight this issue, but Niehaus also noted that “deep fakes,” which can superimpose one person’s face onto another’s body, raise a number of ethical considerations. He additionally noted that there’s the potential for AI to replace jobs, such as those that have traditionally revolved around content creation.
“It looks like a lot of these tools will be equipped to replace white-collar jobs more than blue-collar jobs,” he said.
Niehaus even imagines a future in which consumers interact with AI-based real estate agents.
“If you can combine a chatbot with a live video that looks like a real person, even if you tell people it’s AI, I do think there’s a segment of the population that’ll be okay with that,” he said.
That may sound ominous, and indeed for some agents it may be. As more AI tools become able to work together, their collective reach and disruptive potential will only grow.
But Niehaus also framed the present shift as an opportunity. Agents who learn how to use AI might be able to extend their reach to far-off locations by offering consumers in new markets a chatbot version of themselves. And whether that specifically happens or not, Niehaus’ point was that agents who fail to explore AI are at the greatest risk, both from the technology itself and from their human rivals who learn the ropes.
“That’s why I’m really emphasizing the idea of learning the tools,” Niehaus said. “We can’t predict exactly where it’s going to go, but there’s a lot of value in agents really experimenting with it right now.”