The night before her 18th birthday, Giuliana fell asleep on Earth for the last time. Only later did she realize it was far from the first time she'd been abducted. Numerous troubling, often inexplicable occurrences that had punched confounding holes throughout the continuity of her childhood & adolescent memories suddenly pieced themselves together into a coherent, canonical whole, one in which her current situation, light-years removed from Earth, became the only possible, inevitable adulthood.
Her hosts had no need for spoken language, communicating instead via rapid exchanges of telepathic streams, each composed of a series of flashing images & sensory echoes. She knew this only because she could effortlessly receive & send these streams herself, a capability she quickly learned was not an especially common trait amongst humankind. Only humans such as herself, possessed of this rare gift of mental gab, were granted employment at the facility, along with the privilege to roam the station freely when off the clock, limited only by clearly indicated atmospheric & gravitational boundaries, beyond which no wandering human was likely to survive for long.
This earthling-friendly area was massive, kilometers across & equally deep. It would take her several months' worth of near-daily excursions to see it all: hundreds of sizable, transparent exhibition spaces where the telepathically impaired were kept, isolated or in small groups. A literal human zoo…
[…for more of Giuliana's story + further/alternate/exclusive image variations, visit the NEW official inhumantouch.art website…]
ABOUT MY PROCESS
This is the 2nd posted example (Elli being the 1st) of my revised 2023 general workflow, which did not involve DALL-E 2 at any point. The week Giuliana was originally published also brought the long-awaited announcement of DALL-E 3, but that turned out to be a massive disappointment, at least in terms of my being able to consistently or usefully re-integrate it into my creation process.
So, in Giuliana's case, LVL0 was initially created in Midjourney and expanded using v5.2's then-new "zoom out" feature, roughly equivalent to the "outpainting" available in other tools and to Photoshop's "generative expand." Midjourney also introduced inpainting with this version, calling its implementation of the feature "vary region." Although I'd become very familiar with the workings of Midjourney over the course of 2023, I would still be waiting another year for it to become a website and finally divorce itself from Discord.

Using even a single AI platform outside of Photoshop means more time spent managing multiple files, an overhead that expands dramatically with each additional AI involved in the process. Midjourney produces 4 new images for every prompt, zoom, or varied region, resulting in a lot of upscaling, exporting, downloading, renaming, and importing into Photoshop. Even their recently released online editor, which finally allows external images to be worked on in Midjourney, still offers no way to combine elements from different outputs prior to download/export.
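To give a sense of the grunt work involved, here's a minimal sketch of the kind of renaming script that can tame a folder of fresh Midjourney downloads. The LVL-based naming scheme and folder layout here are illustrative assumptions, not my exact setup:

```python
# Illustrative only: batch-rename a folder of Midjourney downloads into a
# hypothetical project/LVL naming scheme before importing into Photoshop.
import shutil
import sys
from pathlib import Path

def organize(src_dir: str, project: str, level: int) -> None:
    src = Path(src_dir)
    dest = src / f"{project}_LVL{level:+d}"   # e.g. giuliana_LVL+0, giuliana_LVL-1
    dest.mkdir(exist_ok=True)
    # Sort by modification time so variant numbering follows download order.
    pngs = sorted(src.glob("*.png"), key=lambda p: p.stat().st_mtime)
    for i, png in enumerate(pngs, start=1):
        new_name = f"{project}_LVL{level:+d}_v{i:02d}.png"
        shutil.move(str(png), dest / new_name)
        print(f"{png.name} -> {new_name}")

if __name__ == "__main__":
    # Usage: python organize.py ./downloads giuliana 0
    organize(sys.argv[1], sys.argv[2], int(sys.argv[3]))
```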
Obviously, Photoshop is always going to be my central base of operations for working on raster images, so everything gets imported into a PSD ASAP and arranged in layers and folders that define the different levels, making sure they match up perfectly when enlarged and/or reduced. I've also recorded dozens of Actions (macros) to keep grunt work and repetition to a minimum.
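The geometry of that matching-up is simple enough to show in a toy calculation. Assuming (hypothetically) a uniform, centered 2x zoom between each LVL and the next one out, LVL N occupies the middle quarter of LVL N+1 when both are rendered at the same pixel dimensions:

```python
# Toy illustration of how adjacent levels line up, assuming a uniform,
# centered 2x zoom between each LVL and the next one out.

def inner_level_bounds(width: int, height: int, zoom: float = 2.0):
    """Pixel bounds of LVL N inside LVL N+1, both rendered at width x height."""
    w, h = width / zoom, height / zoom
    left, top = (width - w) / 2, (height - h) / 2
    return left, top, left + w, top + h

# For a 1024x1024 canvas at 2x zoom, LVL N sits at (256, 256)-(768, 768)
# inside LVL N+1, so its layer must be scaled to 50% to overlay exactly.
print(inner_level_bounds(1024, 1024))  # (256.0, 256.0, 768.0, 768.0)
```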
There was also no real "zoom in" feature yet in any of the tech at play. AI upscalers and enhancers would become more commonplace over the next year, but at the time, to get more close-up variations (negative LVL numbers), I resorted to a command-line image upscaler called RealSR, then used those upscaled images as image prompts in an online instance of Stable Diffusion called mage.space, setting the influence of the existing image to a significant strength. With some experimentation (helped along considerably by the recent introduction of more advanced, higher-resolution SDXL image models), I've been able to greatly increase the level of realistic detail in the eyes and other facial features (faces pretty much always being the target toward which I'm zooming in), while taking care not to end up with a completely different-looking face.

It generally takes a dozen or so attempts per face, and most of those get quickly discarded after comparison; it hasn't seemed necessary to combine features from more than 2 or 3 of these "refacings" to sufficiently enhance any single visage while still keeping it from looking like an obvious, disparate collage.
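mage.space is a web UI, but the same kind of img2img pass can be sketched locally with Hugging Face's diffusers library and an SDXL model. The prompt, strength value, and file names below are placeholder assumptions, not the settings actually used for Giuliana:

```python
# A rough local equivalent of the mage.space img2img step, using Hugging Face
# diffusers with SDXL. Prompt, strength, and file names are illustrative.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# The RealSR-upscaled close-up serves as the image prompt.
init_image = load_image("giuliana_LVL-1_upscaled.png").resize((1024, 1024))

# In diffusers, lower strength = less added noise = closer to the source image,
# which is how the original face is preserved while fine detail regenerates.
result = pipe(
    prompt="photorealistic close-up portrait, detailed eyes, natural skin",
    image=init_image,
    strength=0.35,          # hypothetical value; tuned per face in practice
    guidance_scale=6.0,
).images[0]
result.save("giuliana_LVL-1_refaced.png")
```

Running a pass like this a dozen times with small strength and seed changes, then comparing the outputs, mirrors the trial-and-discard loop described above.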