According to a new report from Google, recent efficiency improvements have reportedly driven Gemini's water use per prompt down to minimal levels. Google's calculations indicate its applications use just 0.24 watt-hours of electricity and 0.26 ml of water per text prompt, far below previous estimates. That figure is notably lower than the 45 ml to 47.5 ml of water that earlier research estimated other AI models require for comparable text generation.

However, experts like Shaolei Ren from UC Riverside argue Google's figures are misleading because they count only onsite water use, ignoring the broader environmental impact of the energy data centers consume. Data centers typically consume water in two ways: onsite, for cooling, and offsite, through the water used to generate the electricity they draw. Google's assessment omits the water involved in offsite power generation, an omission that casts doubt on its claims.
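To make the methodological dispute concrete, the sketch below compares the two accounting approaches: Google's onsite-only figure versus a total that also counts the water embedded in offsite electricity generation. The water-intensity value used here is a hypothetical placeholder (real values vary widely by region and grid mix); it is not a number from Google's report or from Ren's research.

```python
# Sketch of the two water-accounting methods in the dispute.
# The offsite water intensity below is an illustrative assumption,
# not a figure from Google's report or UC Riverside's research.

ENERGY_PER_PROMPT_WH = 0.24   # Google's reported electricity per text prompt
ONSITE_WATER_ML = 0.26        # Google's reported onsite (cooling) water per prompt

# Assumed water consumed per kWh of grid electricity; actual values
# depend on the regional generation mix. 2.0 L/kWh is hypothetical.
OFFSITE_WATER_L_PER_KWH = 2.0


def total_water_ml(energy_wh: float, onsite_ml: float,
                   offsite_l_per_kwh: float) -> float:
    """Total water per prompt: onsite cooling water plus the water
    embedded in offsite power generation (the part Google omits)."""
    offsite_ml = (energy_wh / 1000.0) * offsite_l_per_kwh * 1000.0  # kWh -> L -> ml
    return onsite_ml + offsite_ml


if __name__ == "__main__":
    print(f"Onsite only (Google's method):    {ONSITE_WATER_ML:.2f} ml")
    total = total_water_ml(ENERGY_PER_PROMPT_WH, ONSITE_WATER_ML,
                           OFFSITE_WATER_L_PER_KWH)
    print(f"Onsite + offsite (total method):  {total:.2f} ml")
```

Under this illustrative assumption, including offsite generation water roughly triples the per-prompt figure (0.26 ml becomes about 0.74 ml), which is the kind of gap Ren argues makes onsite-only numbers an apples-to-oranges comparison against total-consumption estimates.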

Despite Google's assertion that UC Riverside's calculations are flawed, experts say that for an accurate comparison, Google's data should be measured the same way as the earlier research, which counted total water consumption. Ren maintains that Google's current reporting does not meet the standard expected of technical research, and that discrepancies between Google's figures and those of previous studies could point to methodological oversights.

While Google's numbers show a substantial improvement in AI's water footprint, critics stress that a fair assessment must account for the complete environmental cost.