The "Internet" is a broad term and is more than just streaming. The transmission of data takes very little power so it would be inaccurate to equate the percentage of streaming video to percentage of energy used.
I don't know how these analyses are done, but here is one:
https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines
"Based on average viewing habits, my updated analysis shows that viewing devices account for the majority of energy use (72%), followed by data transmission (23%) and data centres (5%)."
https://www.interdigital.com/white_papers/the-sustainable-future-of-video-entertainment
"An 8K TV uses more than twice as much electricity than a 4K TV, which shows in energy bills; however, users are unaware that this accounts for 108gCO2e per hour of emissions, 2.6 times higher than the equivalent 4K set. To put this into context, for the expected 30 million 8K TVs that will be installed by 2023, energy consumption for video streaming will be 50% higher than 343 million tablets used worldwide. As consumer engagement with sustainability increases, these choices will attract greater scrutiny."
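For a rough sense of where a figure like 108 gCO2e per hour could come from, here is a quick sketch. The power draws (~300 W for an 8K set, ~115 W for a 4K set) and the grid carbon intensity (360 gCO2e/kWh) are my own illustrative assumptions, not figures taken from the white paper:

```python
# Back-of-the-envelope: viewing emissions per hour from TV power draw.
# All input values are illustrative assumptions, not sourced figures.

GRID_INTENSITY_G_PER_KWH = 360  # assumed grid carbon intensity, gCO2e/kWh

def emissions_g_per_hour(power_watts: float) -> float:
    """Emissions (gCO2e) for one hour of viewing at a given power draw."""
    return (power_watts / 1000) * GRID_INTENSITY_G_PER_KWH

for label, watts in [("8K TV (assumed ~300 W)", 300),
                     ("4K TV (assumed ~115 W)", 115)]:
    print(f"{label}: {emissions_g_per_hour(watts):.0f} gCO2e per hour")

# Output:
# 8K TV (assumed ~300 W): 108 gCO2e per hour
# 4K TV (assumed ~115 W): 41 gCO2e per hour  (roughly the 2.6x ratio quoted)
```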
A lot of streaming content is no doubt viewed on smaller devices. But then I really have to ask whether the increased resolution gives enough benefit to justify the costs.
I think the environmental costs are difficult to sort out. In the past, people used to drive to video rental stores to pick up a movie, or sit in a large air-conditioned theatre.
Yes, but people did this infrequently, whereas today many people "binge-watch" series on streaming services (i.e. watch a whole season in one go). The increased availability of content has led to increased consumption. This probably has disadvantages for health as well.
If streamed video (such as conference calls) removes the need for air travel, then again the reductions are clear.
Sure, video conferencing is a good idea for long-distance communication. However, these applications use very low-quality video. Zoom transmits about 1 GB per hour for video, while Netflix uses 25 Mbps for 4K (about 11 GB per hour), and I would imagine 8K would use even more than that. 4K optical discs have bit rates of 82-128 Mbps (37-58 GB per hour), and if the bandwidth were available, people would likely choose this higher quality for streaming as well. From what I can see, perceived image quality is related more to the bandwidth used than to resolution, but resolution nevertheless seems to have great marketing value.
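For reference, converting the bit rates above into data volume per hour is simple arithmetic; a minimal sketch using the approximate figures mentioned:

```python
# Convert a video bit rate in Mbps to data volume in GB per hour.
def mbps_to_gb_per_hour(mbps: float) -> float:
    return mbps * 3600 / 8 / 1000  # Mbit/s -> GB/h (decimal GB)

for label, mbps in [("Netflix 4K stream", 25),
                    ("4K optical disc (low end)", 82),
                    ("4K optical disc (high end)", 128)]:
    print(f"{label}: {mbps_to_gb_per_hour(mbps):.1f} GB per hour")

# Output:
# Netflix 4K stream: 11.2 GB per hour
# 4K optical disc (low end): 36.9 GB per hour
# 4K optical disc (high end): 57.6 GB per hour
# (Zoom's ~1 GB per hour corresponds to only about 2.2 Mbps.)
```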
The fact that people do video conferencing despite the poor quality suggests to me that most people simply don't care much about the quality of the video; they are more interested in having a quality discussion with friends and colleagues.
Making some simple assumptions, a one-hour flight uses the same amount of energy per person as two years of video conferencing.
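To make those "simple assumptions" explicit, here is one way the arithmetic could go; every number below is an illustrative assumption of mine, not a sourced figure:

```python
# Rough comparison: energy per person for a one-hour flight vs. hours of
# video conferencing. All figures below are illustrative assumptions.

FLIGHT_KWH_PER_PERSON = 250        # assumed: ~1 h short-haul flight, per seat
CONFERENCING_KWH_PER_HOUR = 0.15   # assumed: device + network + data centre
HOURS_PER_WORKING_DAY = 3          # assumed conferencing time per working day
WORKING_DAYS_PER_YEAR = 250

equivalent_hours = FLIGHT_KWH_PER_PERSON / CONFERENCING_KWH_PER_HOUR
equivalent_years = equivalent_hours / (HOURS_PER_WORKING_DAY * WORKING_DAYS_PER_YEAR)
print(f"{equivalent_hours:.0f} hours of conferencing ~= {equivalent_years:.1f} years")

# Output: 1667 hours of conferencing ~= 2.2 years
```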
Again, I am not against video conferencing. I am merely questioning whether increased video quality (from 4K upwards) in entertainment is really beneficial; if it becomes widely used, it may increase the environmental costs without providing much in terms of increased substance. I've read that today computer-generated images in movies are rendered at 2K resolution rather than 4K or 8K, and that the most commonly used movie cameras in Hollywood produce 2.6K files. Rendering computer-generated images at resolutions higher than 2K would apparently be too slow and/or costly. I see no one complaining about the 2K CGI in movies. So why does consumer-level video footage now have to be shot at 8K? I just don't get it.
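To put the rendering and acquisition cost in perspective, pixel counts scale steeply with resolution. The frame dimensions below are the standard ones, except for "2.6K", for which I've assumed approximate dimensions just for illustration:

```python
# Pixel counts for common video resolutions, relative to 2K DCI.
resolutions = {
    "2K DCI": (2048, 1080),
    "2.6K":   (2560, 1440),   # approximate; exact dimensions vary by camera
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

base = resolutions["2K DCI"][0] * resolutions["2K DCI"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} Mpixel ({pixels / base:.1f}x 2K DCI)")

# Output:
# 2K DCI: 2.2 Mpixel (1.0x 2K DCI)
# 2.6K: 3.7 Mpixel (1.7x 2K DCI)
# 4K UHD: 8.3 Mpixel (3.8x 2K DCI)
# 8K UHD: 33.2 Mpixel (15.0x 2K DCI)
```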
The point of the window-sealing example is to give an order-of-magnitude sense of how much energy is being expended. My contention is that other simple things that can be done are more impactful than restricting resolution on digital devices.
I'm not interested in "restricting" the resolution of digital devices (e.g. by regulation), but I am questioning whether it makes sense to pay for hardware that allows 8K video acquisition at the camera and lens level, then for the additional processing hardware and storage needed to make editing and storage practical, and finally to distribute the content at higher resolutions (and for consumers to buy wall-sized screens so that they can see the details). I'm questioning whether this added expense and effort will actually provide benefits proportional to the cost (including monetary cost, extra time used, and environmental cost), and whether we should instead focus our efforts elsewhere, for example on making more interesting content rather than just higher resolution. The resolution and image quality of video are already very high.
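As a rough illustration of the acquisition and storage side of that argument, the uncompressed data rate of 8K material is substantial even before any editing overhead is considered; the bit depth and frame rate below are assumptions chosen only for illustration:

```python
# Uncompressed data rate of a video stream (illustrative assumptions:
# 10 bits per sample, 3 colour samples per pixel, 24 frames per second).
def uncompressed_tb_per_hour(width: int, height: int,
                             bits_per_sample: int = 10,
                             samples_per_pixel: int = 3,
                             fps: int = 24) -> float:
    bits_per_second = width * height * bits_per_sample * samples_per_pixel * fps
    return bits_per_second * 3600 / 8 / 1e12  # bits/s -> TB per hour (decimal)

print(f"4K UHD: {uncompressed_tb_per_hour(3840, 2160):.1f} TB per hour uncompressed")
print(f"8K UHD: {uncompressed_tb_per_hour(7680, 4320):.1f} TB per hour uncompressed")

# Output:
# 4K UHD: 2.7 TB per hour uncompressed
# 8K UHD: 10.7 TB per hour uncompressed
```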