New AI algorithms developed by computer scientists at Tel Aviv University in Israel can generate photos of food from text recipes alone, according to a paper published on Cornell University’s arXiv.org site. The recipes list the ingredients and the method of preparation, but they include no visual description of how the finished plate looks.
The AI wasn’t allowed to read the title of the recipe; it worked exclusively from the ingredients and the instructions to create the photo, demonstrating a capacity for abstraction that until now many assumed computers lacked.
The method relies on stacked Generative Adversarial Networks (GANs), according to the report. The first step, which the scientists call text embedding, converts the recipe’s text into numerical vectors that capture its meaning by mapping it against other pieces of content.
A GAN then analyzes those vectors, comparing them against the descriptions paired with more than 50,000 real-world photos of food, and generates synthetic photos for new recipes.
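To make the pipeline concrete, here is a heavily simplified sketch of the two stages described above: turning recipe text into a vector, then conditioning a generator on that vector. This is purely illustrative and is not the authors’ method; the hash-based embedding and the random-weight generator are toy stand-ins for the learned models a real stacked GAN would use.

```python
import numpy as np

def embed_recipe(text: str, dim: int = 64) -> np.ndarray:
    """Toy text embedding: hash each word into a fixed-size vector.
    Real systems learn these vectors; this version only shows how
    free text becomes numbers a GAN can condition on."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def toy_generator(embedding: np.ndarray, rng: np.random.Generator,
                  img_size: int = 8) -> np.ndarray:
    """Stand-in for a GAN generator: mix the recipe embedding with
    random noise and project it to a small 'image'. A trained GAN
    would use learned weights; here they are random."""
    noise = rng.standard_normal(embedding.shape[0])
    latent = embedding + 0.1 * noise
    weights = rng.standard_normal((img_size * img_size, latent.shape[0]))
    img = np.tanh(weights @ latent)  # tanh keeps pixel values in [-1, 1]
    return img.reshape(img_size, img_size)

recipe = "2 eggs, 1 cup flour; whisk and fry until golden"
emb = embed_recipe(recipe)
img = toy_generator(emb, np.random.default_rng(0))
print(emb.shape, img.shape)  # (64,) (8, 8)
```

In the actual system, both stages are trained jointly against real recipe-photo pairs, so the generator learns which visual features correspond to which ingredients and cooking steps.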
Giving those food Instagrammers yet another tool to annoy the rest of us.