MIT unveils PhotoGuard tech that protects images from malicious AI edits
Written by: Andrew Tarantola
Source: Engadget
Dall-E and Stable Diffusion are just the beginning. As generative AI systems grow in popularity and companies work to differentiate their products from those of their competitors, chatbots across the internet are gaining the ability to edit images as well as create them, with companies like Shutterstock and Adobe leading the way. But these new AI capabilities also bring familiar problems, such as unauthorized tampering with, or outright misappropriation of, existing online works and images. Watermarking technology can help reduce the latter problem, while the new "PhotoGuard" technique developed by MIT CSAIL can help us prevent the former.
PhotoGuard reportedly works by altering selected pixels in an image so as to disrupt an AI model's ability to understand the image's content. These "perturbations," as the research team calls them, are invisible to the human eye but easily read by machines. The "encoder" attack method that introduces these artifacts targets the model's underlying (latent) representation of the target image, the complex mathematics describing the position and color of every pixel, essentially preventing the AI from understanding what it is looking at. (Here, "artifacts" means features introduced into the image that are not present in the original scene.)
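To make the idea concrete, the sketch below shows how an imperceptible, budget-limited perturbation could be optimized so that an image's latent encoding drifts toward that of an unrelated target (for instance, a plain gray image). This is an illustrative PyTorch sketch of the general encoder-attack idea, not PhotoGuard's released code; `encode_fn`, the epsilon budget, and the step sizes are all assumptions.

```python
# Minimal sketch of an "encoder attack": craft an invisible perturbation that pushes
# an image's latent representation toward that of a target image, so a latent-diffusion
# editor no longer "sees" the original content. Names and hyperparameters are illustrative.
import torch

def immunize(image, target_image, encode_fn, eps=8/255, step=1/255, iters=200):
    """PGD-style optimization of an L_inf-bounded perturbation.

    image, target_image: float tensors in [0, 1], shape (1, 3, H, W)
    encode_fn: differentiable map from an image to its latent representation
               (e.g. the VAE encoder of a latent diffusion model)
    """
    with torch.no_grad():
        target_latent = encode_fn(target_image)       # latent we want to mimic

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encode_fn((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # descend toward the target latent
            delta.clamp_(-eps, eps)                    # keep the perturbation imperceptible
        delta.grad.zero_()

    return (image + delta).clamp(0, 1).detach()        # "immunized" image
```

Under these assumptions, the returned image should look unchanged to a person while encoding, in the editor's eyes, as something close to the gray target.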
"The encoder attack makes the model think that the input image (to be edited) is some other image (such as a grayscale image)," Hadi Salman, a Ph.D. student at MIT and the paper's first author, told Engadget. "The Diffusion attack forces the Diffusion model to edit some of the target images, which can also be some gray or random images." Protected images for reverse engineering.
"A collaborative approach involving model developers, social media platforms, and policymakers can be an effective defense against unauthorized manipulation of images. Addressing this pressing issue is critical today," Salman said in a release. "While I'm excited to be able to contribute to this solution, there is still a lot of work to do to make this protection practical. Companies developing these models need to invest in targeting the threats these AI tools may pose for robust immune engineering."