Giuliana

The night before her 18th birthday, Giuliana fell asleep on Earth for the last time. Later, she realized it was far from the first time she'd been abducted. Numerous troubling occurrences—occasionally inexplicable happenings that had punched confounding holes throughout the continuity of her childhood & adolescent memories—suddenly all pieced themselves together into a coherent, canonical whole, for which her current situation, light years removed from Earth, became the only possible, inevitable adulthood.

Her hosts had no need for spoken language, communicating instead via rapid exchange of telepathic streams, each comprising a series of flashing images & sensory echoes. She knew this only because she could effortlessly receive & send these streams herself, a capability she quickly learned was not an especially common trait amongst humankind. Only humans such as herself—with this rare gift of mental gab—were granted employment at the facility, along with the privilege to freely roam the station when off the clock, limited only by clearly indicated atmospheric & gravitational boundaries, beyond which no wandering human was likely to survive for long.

This earthling-friendly area was massive, kilometers across & equally deep. It would take her several months' worth of near-daily excursions to see it all: hundreds of sizable, transparent exhibition spaces where the telepathically impaired were kept, isolated or in small groups. A literal human zoo…

[…for more of Giuliana's story + further/alternate/exclusive image variations, visit the NEW official inhumantouch.art website…]

THE MAKING OF…

This is the 2nd posted example (Elli being the 1st) of my revised general workflow, which no longer involves DALL-E 2 at any point. Of course, this week brought the long-awaited announcement of DALL-E 3, but I have yet to experiment with it: for the moment it's tied to a premium ChatGPT subscription, which I don't have and don't really want to buy, because (at least for me) generative text models have proven far less interesting/inspiring than the ongoing advancements in generative image models.

So, in Giuliana's case, LVL0 was initially created in Midjourney's Discord and expanded using v5.2's new "zoom out" feature, roughly equivalent to the "outpainting" available in other tools and to Photoshop's new "generative expand." Midjourney now allows inpainting as well, under the name "vary region." Although I've become very familiar with the workings of Midjourney over the course of 2023, I'm still eager for it to become a website or an app and divorce itself from Discord. Using even a single AI platform outside of Photoshop means more time spent managing multiple files, and that overhead grows dramatically with each additional AI involved in the process. Midjourney produces 4 new images for every prompt, zoom, or varied region, and offers no way to combine elements from different outputs prior to download/export.
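That download/export bottleneck is at least easy to script around locally. Below is a minimal housekeeping sketch in Python (using Pillow) that splits a downloaded 2×2 Midjourney grid into its four quadrants and files each one under a per-level folder; the `NAME_LVLn_qN` filename scheme is an assumption loosely modeled on the captions at the bottom of this post, not anything Midjourney itself produces.

```python
# Hypothetical housekeeping sketch: split a downloaded Midjourney 2x2 grid
# into its four quadrants and file them under a per-level folder. The
# filenames and LVL naming scheme are assumptions for illustration.
from pathlib import Path
from PIL import Image

def split_grid(grid_path: str, name: str, level: int, out_dir: str = "levels") -> list[Path]:
    """Cut a 2x2 grid into quadrants and save them as
    <name>_LVL<level>_q<n>.png under <out_dir>/LVL<level>/."""
    grid = Image.open(grid_path)
    half_w, half_h = grid.width // 2, grid.height // 2
    dest = Path(out_dir) / f"LVL{level}"
    dest.mkdir(parents=True, exist_ok=True)
    saved = []
    for i, (left, top) in enumerate([(0, 0), (half_w, 0), (0, half_h), (half_w, half_h)]):
        quadrant = grid.crop((left, top, left + half_w, top + half_h))
        path = dest / f"{name}_LVL{level}_q{i + 1}.png"
        quadrant.save(path)
        saved.append(path)
    return saved

# e.g. split_grid("giuliana_zoom_out_grid.png", "Giuliana_3431FC", level=1)
```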

Photoshop is always going to be my central base of operations for working on raster images, so everything naturally gets imported into a PSD ASAP, arranged in layers and folders to create the different levels and to make sure they match up when enlarged and/or reduced; I've also recorded dozens of Actions to keep grunt work and repetition to a minimum. But the moment I make the first manual tweak in Photoshop, in almost every circumstance, Midjourney is cut out of the workflow from then on, as one can never upload altered images back into Midjourney for further editing (you can upload them into Mj if you like, but only to be used as image prompts or blend sources for entirely new images... another caveat that makes me miss DALL-E 2's simple, elegant canvas interface). There's no back-and-forth/in-and-out with Midjourney; there's only always out…
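To make that "match up when enlarged and/or reduced" business concrete: each zoom-out level should contain the previous level, shrunk by the zoom factor, at its center. Here's a small illustrative sketch (Python with Pillow rather than Photoshop, and with hypothetical filenames) that composites a tighter level onto the middle of the next level out, assuming Midjourney's 2× "zoom out":

```python
# Minimal alignment sketch: check how a tighter level nests inside the next
# zoom-out. Assumes Midjourney's 2x "zoom out" (1.5x is also offered), so
# LVL(n) should cover the central half, by side length, of LVL(n+1).
from PIL import Image

ZOOM_FACTOR = 2  # assumed zoom-out factor between adjacent levels

def overlay_level(inner_path: str, outer_path: str, out_path: str) -> None:
    """Scale the tighter image down by ZOOM_FACTOR and composite it onto the
    center of the wider one, to eyeball how well the two levels line up."""
    inner = Image.open(inner_path)
    outer = Image.open(outer_path)
    shrunk = inner.resize(
        (inner.width // ZOOM_FACTOR, inner.height // ZOOM_FACTOR),
        Image.Resampling.LANCZOS,
    )
    offset = ((outer.width - shrunk.width) // 2, (outer.height - shrunk.height) // 2)
    outer.paste(shrunk, offset)
    outer.save(out_path)

# e.g. overlay_level("Giuliana_LVL0.png", "Giuliana_LVL1.png", "check_LVL0_in_LVL1.png")
# Visible seams or drift in the output mean the layers need nudging in the PSD.
```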

There is also no real "zoom in" feature in any of the tech at play here, so to get more close-up variations (negative LVL numbers), I've been resorting to the REALSR command-line image upscaler, then using those upscaled images as image prompts in an online instance of Stable Diffusion called mage.space, with the influence of the existing image set to a significant strength. With some experimentation (helped along considerably by the recent introduction of more advanced, higher-resolution SDXL image models), I've been able to greatly increase the level of realistic detail in the eyes and other facial features (faces almost always being the target toward which I'm zooming in), while taking care not to end up with a completely different-looking face. It generally takes a dozen or so attempts per face, and most of those get quickly discarded after comparison; combining features from at most 2 or 3 of these "refacings" has proven sufficient to enhance any single visage without it looking like an obvious, disparate collage.
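For the curious, here's a rough local equivalent of that zoom-in step, using the diffusers library instead of mage.space (which exposes the same basic knobs through its web UI). The prompt, strength value, model choice, and filenames are illustrative assumptions, not the exact settings used for Giuliana, and the upscaler invocation assumes the realsr-ncnn-vulkan build of REALSR:

```python
# Rough sketch of the upscale-then-img2img "refacing" step described above.
# All names, prompts, and numbers here are illustrative assumptions.
import subprocess
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# 1) Upscale the face crop first (flags shown are realsr-ncnn-vulkan's;
#    adjust if using a different RealSR build).
subprocess.run(
    ["realsr-ncnn-vulkan", "-i", "face_crop.png", "-o", "face_crop_4x.png", "-s", "4"],
    check=True,
)

# 2) Feed the upscaled crop back through SDXL img2img at moderate strength:
#    low enough to keep the same face, high enough to invent realistic detail.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("face_crop_4x.png").resize((1024, 1024))
result = pipe(
    prompt="photorealistic portrait, detailed eyes, soft light",  # illustrative
    image=init_image,
    strength=0.35,  # roughly: refine detail without changing identity
    guidance_scale=7.0,
).images[0]
result.save("face_crop_refaced.png")
```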

…to be continued…

Giuliana 3431FC LVL-1~

Giuliana 3431FC LVL0

Giuliana 3431FC LVL1

Giuliana 3431FC LVL2

Giuliana 56DDE8 LVL0