A U.K. woman was photographed standing in a mirror where her reflections didn’t match, but not because of a glitch in the Matrix. Instead, it’s a simple iPhone computational photography mistake.
This story may be amusing, but it’s actually a serious issue if Apple is doing this without people being aware of it, because cellphone imagery is used in things like court cases. The relative positions of people in a scene really fucking matter in those kinds of situations. Someone’s photo of a crime could be dismissed or discredited using this exact news story as an example – or worse, someone could be wrongly convicted because the composite produced a misleading representation of the scene.
I see your point, though I wouldn’t go that far. It’s an edge case that can only happen within a very short window of time.
Similar effects can be achieved with traditional cameras that use a rolling shutter.
If you’re only concerned about the relative positions of different people during a time frame, I don’t think you need to be that worried. Being aware of it is enough.

I don’t think that’s what’s happening. I think Apple is “filming” over the course of the seconds you have the camera open, and uses the press of the shutter button to select a specific shot from the hundreds of frames that have been captured as video. Then, some algorithm appears to be assembling different portions of those shots into one “best” shot.
It’s not just a mechanical shutter effect.
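If that guess is right, the logic would look something like this toy sketch: keep a burst of frames, then for each region of the image pick the “best” (here: sharpest) version across the burst. Everything in it is my assumption — the tile size, the sharpness metric, and the idea that the compositing happens per-region at all — not Apple’s actual pipeline:

```python
import cv2  # pip install opencv-python
import numpy as np


def composite_best_shot(frames: list[np.ndarray], tile: int = 64) -> np.ndarray:
    """Toy composite over a burst of BGR frames: for each tile, keep the
    sharpest patch found across all frames.

    Different tiles can come from different moments in time, which is
    exactly how one photo could end up with three different arm positions.
    Tile size and the Laplacian-variance sharpness proxy are assumptions.
    """
    h, w = frames[0].shape[:2]
    out = frames[0].copy()
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            best_patch, best_score = None, -1.0
            for frame in frames:
                patch = frame[y:y + tile, x:x + tile]
                gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
                score = cv2.Laplacian(gray, cv2.CV_64F).var()  # sharpness proxy
                if score > best_score:
                    best_patch, best_score = patch, score
            out[y:y + tile, x:x + tile] = best_patch
    return out
```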
I’m aware of the differences. I’m just pointing out that similar phenomena have been discussed ever since rolling-shutter artifacts became a thing. It still only takes milliseconds for an iPhone to finish taking its plethora of photos to composite. For the majority of forensic use cases, it’s a non-issue imo. People don’t move that quickly irl to change relative positions substantially.
Did you look at the example in the article? It’s clearly not milliseconds. It’s several whole seconds.
You don’t need a few whole seconds to put an arm down.
Edit: I should rephrase. I don’t think computational photography algorithms would risk compositing photos taken whole seconds apart. In well-lit environments, one photo needs only 1/100 of a second or less to expose properly. Using photos that are temporally too far apart risks objects moving too much in the frame, causing the composite to fail.
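Some back-of-the-envelope numbers (all assumed, not measured on any real device) show why the spacing matters:

```python
# Rough, assumed numbers — not measurements of any real phone.
exposure_s = 1 / 100     # single-frame exposure time in good light
frame_gap_s = 1 / 30     # typical gap between consecutive burst frames
burst_window_s = 0.5     # hypothetical window a compositor might draw from
arm_speed_m_s = 2.0      # rough speed of an arm being lowered

print(f"blur within one exposure: {arm_speed_m_s * exposure_s * 100:.1f} cm")
print(f"drift between frames:     {arm_speed_m_s * frame_gap_s * 100:.1f} cm")
print(f"drift across the window:  {arm_speed_m_s * burst_window_s * 100:.0f} cm")
```

With those assumptions, an arm blurs about 2 cm inside one exposure but drifts tens of centimeters across a half-second window — which is why compositing frames that far apart is risky.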
There’s three different arm positions in a single picture. That doesn’t happen in the blink of an eye.
The camera is taking many frames over a relatively long time to do this.
This is nothing at all like rolling shutter, and it’s very obvious from looking at the example in the article.
Those arm positions occur over the course of a fluid motion within a single second. How long does it take you to drop your hands to your sides, or to raise them from your sides to clasped? It doesn’t take me more than about half a second as a deliberate movement.
It takes you several seconds to move your arm? I hope you don’t do manual work.
Also, have you used the iOS camera app before? You can see how long it takes the iPhone to take multiple shots for the always-on HDR feature, and it isn’t several seconds.
There’s three different arm positions in a single picture. That doesn’t happen in the blink of an eye.
It’s a lot faster than you might be expecting. I found it helps to visualize it in person. Go to a mirror and start with your hands together like in the right side mirror. Now let your arms down naturally, to the position in the left side mirror. If you don’t move your arms at the same exact time, one elbow will still be parallel to the floor while the other elbow has extended already, just like in the middle position.
Thus, we can tell that the camera compiled the image from right to left.
I can also see the three arm positions being a single motion, just in three different time frames. If it really takes seconds to complete a composite, then it should also be very easy to reproduce, and not something so rare it makes it into the news. If I still can’t convince you, I guess we agree to disagree then.
This isn’t an issue at all; it’s a bullshit headline. And it worked.
This is the result of shooting in panorama mode.
In other news, the sky is blue
Like, an episode of Bones or some shit.
Uhm, ok?
The way the girl’s post is written, it’s like she found out Apple made camera lenses from orphans’ retinas (“almost made me vomit on the street”). I assumed it was well known that iPhone takes many photos and stitches the pic together (hence the usually great quality). Now the software made a mistake, resulting in a definitely cool/interesting pic, but that’s it.
Also, maybe stop flailing your arms around when you want your pic taken in your wedding dress.
When have panorama photos ever not done weird stuff?
It’s a really cool discovery, but I don’t know how Apple is supposed to program against it.
What surprises me is how much of a time range each photo has to work with. Enough time for Tessa to put down one arm and then the other. It’s basically recording a mini-video and selecting frames from it. I wonder if turning off things like Live Photo (which retroactively starts the video a second or two before you actually press record) would force the Camera app to select from a briefer range of time.
Maybe combine facial recognition with post-processing to tell the software that if it thinks it’s looking at multiple copies of the same person, it needs to time-sync the sections of frames chosen for the final photo. It wouldn’t be foolproof, but it would be better than nothing.
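As a rough sketch of that idea — the data model, the frame budget, and the notion that the compositor even tracks which frame each region came from are all made up for illustration:

```python
from dataclasses import dataclass


@dataclass
class Region:
    frame_index: int  # which burst frame this region of the composite came from
    has_face: bool    # whether a face detector fired inside this region


def composite_is_time_synced(regions: list[Region], max_spread: int = 3) -> bool:
    """Toy check: if multiple composited regions contain faces, require the
    frames they came from to be temporally close. If the check fails, the
    camera could fall back to a single frame instead of a composite."""
    face_frames = [r.frame_index for r in regions if r.has_face]
    if len(face_frames) < 2:
        return True  # zero or one face region: nothing to time-sync
    return max(face_frames) - min(face_frames) <= max_spread
```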
Program against it? It’s a camera. Put what’s on the light sensor into the file and you’re done. They programmed it to make this happen, by pretending that multiple images are the same image.
That’s oversimplified. There’s only so much you can get from a sensor at the sizes used in mobile devices. To compensate, there’s A LOT of processing that goes on. Even higher-end DSLR cameras do post-processing.
Even shooting RAW like you’re suggesting involves some amount of post processing for things like lens corrections.
It’s all that post processing that allows us to have things like HDR images for example. It also allows us to compensate for various lighting and motion changes.
Mobile phone cameras are more about the software than the hardware these days
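As a concrete example of that kind of processing, exposure fusion merges a bracketed stack of shots into one image — here a minimal sketch with OpenCV’s Mertens fusion (the file names are placeholders, and this isn’t necessarily what any particular phone does internally):

```python
import cv2  # pip install opencv-python

# A bracketed stack: under-, normally, and over-exposed shots (placeholder paths).
stack = [cv2.imread(path) for path in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion weights each pixel by contrast, saturation, and
# well-exposedness — no camera response curve or tone mapping required.
fused = cv2.createMergeMertens().process(stack)

# Output is float32 roughly in [0, 1]; scale back to 8-bit for saving.
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```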
With a DSLR, the person editing the pictures has full control over what post processing is done to the RAW files.
deleted by creator
Correct, I was referring to RAW shot on mobile, not a proper DSLR. I guess I should have been clearer about that. Sorry!
Raw files from cameras have metadata that tells raw converters which color profile and lens they were taken with, but any camera worth using professionally doesn’t apply native corrections to raw files. However, in special cases, as with highly distorting lenses, the raw files have a distortion profile enabled by default.
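You can poke at that metadata yourself, e.g. with the third-party exifread package (the file name is a placeholder, and which tags are present varies by camera):

```python
import exifread  # pip install exifread

# Any TIFF-based raw (e.g. a .dng) should work; the path is a placeholder.
with open("shot.dng", "rb") as f:
    tags = exifread.process_file(f, details=False)

# Hints a raw converter can use to pick its color/lens-correction profile.
for key in ("Image Make", "Image Model", "EXIF LensModel"):
    print(key, "->", tags.get(key, "<not present>"))
```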
Correct, I was referring to RAW shot on mobile devices, not a proper DSLR. That was my observation based on using the iPhone and Android raw formats.
This isn’t my area of expertise so if I’m wrong about that aspect too let me know! 😃
Oh, so you have no idea what you’re talking about.
So what was I wrong about? I’m always happy to learn from my mistakes! 😊
Do you have some whitepapers I can reference too?
How about a couple decades of industry experience instead?
Gonna provide more information or is this just a trust me bro situation?
Not sure what I’d have to gain from just lying on the Internet about inconsequential things.
Also not sure I can disclose too many technical details due to NDAs, but I’ve worked on camera stacks on multiple Android-based devices. Yes, there’s tons of layers of firmware and software throughout the camera stack, but it very importantly does not alter consequential elements of images, and concentrates on image quality, not image contents.
While the sensors in smartphones might not be as physically large as those in DSLRs - at least, in general - there’s still significant quality in the raw sensor data that does not inherently require the sort of image stitching that Apple is doing.
Oh, so your excuse is you are illiterate?
🙄
Edit: oh, you’re the actual illiterate person from another post. Thanks for stalking me.
You think too highly of yourself.
When you comment spam just about every thread you’ll come across people multiple times.
What’s on the light sensor when? There’s no shutter, it can just capture a continuous stream of light indefinitely.
Most people want a rough representation of what’s hitting the sensor when they push the button. But they don’t actually care about the sensor, they care about what they can see, which doesn’t include the blur from the camera wobbling, or the slight blur of the subject moving.
They want the lighting to match how they perceived the scene, even though that isn’t what the sensor picked up, because your brain edits what you see before you comprehend the image. Doing those corrections is a small step toward incorporating discontinuities in the capture window for better results.
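A zero-shutter-lag pipeline makes that point concrete — a minimal sketch, assuming a simple ring buffer of recent frames (the capacity and the API names are invented here):

```python
from collections import deque


class ZeroShutterLagBuffer:
    """Toy sketch: the sensor streams continuously, and pressing the shutter
    just picks from frames that were already captured — so there is no single
    'moment on the sensor' to dump straight into a file."""

    def __init__(self, capacity: int = 30):
        self.frames = deque(maxlen=capacity)  # oldest frames drop off the back

    def on_new_frame(self, frame) -> None:
        self.frames.append(frame)  # called for every preview frame

    def on_shutter_press(self) -> list:
        return list(self.frames)  # candidate frames for selection/compositing
```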
Or maybe just don’t move your arm for literally less than a second while the photo(s) is/are taken… Moving your arms down takes less than a second if you just let them fall under gravity. It’s a funny pic nonetheless.
I may have missed this in the comments already, but it’s really important to note that the article says the photo was taken using panorama mode, which is why the computational photography thing is even an issue. If you have ever used panorama mode, you should go in expecting some funkiness, especially if someone in the shot is moving, as the bride apparently was when it was shot.
Stop posting Apple advertisements.
There’s a note at the end of the article that says it was taken using pano mode. So this is doubly unsurprising, despite the Instagram caption saying it wasn’t.
deleted by creator
MKBHD made an interesting video about this already a year ago:
This person is an actress and comedian. This is not an iPhone error; it’s just a manually-edited photo from three separate takes that she pretended came out of the phone as-is. It’s a hoax for laughs/attention.