Flat Earthers like to claim that the "light in the sky" we're calling the ISS is actually some other object flying significantly lower. It kind of makes sense - if the Earth is flat and space doesn't exist, satellites can't be a thing either, so the ISS must be something else, like a balloon or a drone. The possibility that people constructed something 400 km above the Earth's surface is in direct opposition to their worldview - not least because data from such an object directly demonstrates that there is a vacuum up there, and that the Earth is round.
So if there was a simple way of checking that the ISS is actually flying at an altitude of 400 km, it would be a hard blow to all kinds of flat Earth claims. Luckily for us, a fairly simple method exists. I suggested it a few years ago to a flat Earther, and eventually used it in practice in 2020. Here is how I did it.
How to check where the ISS is?
The idea is very simple. You just need to observe an ISS pass from two or more places simultaneously. The ISS will be visible in a slightly different direction from each of the places (this is essentially just parallax). Knowing the positions of the observers and the directions from them to the ISS, it is possible to determine the position of the space station.
OK, sounds cool, but how do you measure a direction, and what does it even mean to "measure a direction"? There are multiple ways, but in this case the simplest one is to measure two angles: azimuth and elevation (angular altitude above the horizontal). The azimuth is the horizontal angle between the direction to north and the direction to the point on the horizon directly below the observed object. Conventionally, east is expressed as an azimuth of 90°, south is 180°, west is 270°, and other numbers denote directions in between. The elevation, in turn, is the angle between the horizontal plane and the direction to the object. Measuring these two values is enough to establish the exact direction to the ISS.
How to measure these angles? Traditionally, azimuths can be measured e.g. with a compass, and elevations with a sextant, but... if you have ever seen an ISS pass, you know that the ISS moves across the sky rather quickly. That makes it quite hard to measure the desired angles at some precise moment. We need a better way.
Since the main problem is the motion of the ISS, things would become much simpler if we could eliminate it somehow. This can be done very easily - by just taking a photo! The station will be motionless in the picture, we just need a way of reading the azimuth and elevation from it.
And how do we do that? We need some reference points, and there are some very convenient ones in the sky: the stars. We could measure the positions of the stars with a compass and a sextant, but we can also take a shortcut. Since these positions are very well known, and the azimuth and elevation of any specific star at any given moment can be looked up (to the point of making celestial navigation possible), we can just use an almanac or something equivalent, like Stellarium. That will tell us what the azimuths and elevations of selected stars were at the moment the photo of the ISS was taken.
The last issue to overcome is that in order for the stars to become visible in a photo of the ISS, a long exposure time is needed. That will make the station move a significant distance across the sky while the photo is being taken, which means that it won't be a single point in the picture, but a line. This is not a big problem, though: if we only know the times when the exposure started and finished, we can get the positions of the ISS at those moments just by looking at the ends of the line.
So now we have a way of getting the azimuth and elevation of the ISS as seen from some position, based on a photo. These two angles together with the observer's position determine some line in space - a line of possible positions of the ISS! Knowing more than one such line, we can find where they intersect, which will be the actual position of the ISS in space. Doing this for the points at both the start and the end of the exposure, we will get the station's positions at two distinct moments in time, which will allow us to calculate its speed, too. All that's left now is to actually do it :)
I made a few observations together with two friends; I'll focus on one of them here as an example. We took the photos below on May 16th, 2020, 22:45:25 UTC (May 17th, 0:45:25 Polish time) with 30-second exposures - me in Katowice, one friend near Łódź, and one friend near Mielec, all places in Poland.
For starters, I'll show you how to get the azimuth and elevation of the start and end of the ISS trail based on the photo from Katowice.
The first step is to identify some stars. Below you'll find the photo with some brighter stars and constellations labelled:
I chose five of those stars for further processing: Cor Caroli, Megrez, Merak, Eltanin and Polaris. It's completely possible to use more stars - 5 are just a reasonable minimum, and I didn't want to spend too much time on the analysis, but generally the more, the better. I chose these five because they are fairly widely spread in the frame and clearly visible.
The next step is to write down the coordinates of the chosen stars. We need two sets of coordinates: x and y from the photo (you can find them using a graphical program like GIMP), and azimuth and elevation in reality. I got the last two from Stellarium:
The data from this photo looks like this:
1073 2655 306.3924 49.7092
1372 1930 302.7408 59.5692
4411 121 73.4577 68.8078
# Cor Caroli
200 630 263.5183 58.6198
3589 2949 0.1895 49.6069
Each row is x and y of the star in the photo, and then azimuth and elevation.
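For the later steps it's handy to have this data in a program. Here is a minimal parsing sketch (the inline string stands in for a data file; lines starting with "#" are treated as comments, like the star label above):

```python
import numpy as np

# Star data from the photo: x, y in pixels, then azimuth and elevation in degrees.
# Lines starting with '#' are comments (here labelling one of the stars).
rows = """
1073 2655 306.3924 49.7092
1372 1930 302.7408 59.5692
4411 121 73.4577 68.8078
# Cor Caroli
200 630 263.5183 58.6198
3589 2949 0.1895 49.6069
"""

data = np.array([[float(v) for v in line.split()]
                 for line in rows.strip().splitlines()
                 if line and not line.startswith("#")])
xy_pixels = data[:, :2]   # positions of the stars in the photo
az_el_deg = data[:, 2:]   # azimuths and elevations from Stellarium
print(data.shape)         # (5, 4)
```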
To get the azimuth and elevation of the ISS, we also need the x and y coordinates of the ISS positions in the photo - that is, of the two ends of the trail, which can be read off in the same way as the star positions.
I processed the other two photos in the same fashion. After performing these steps we have all the necessary data, the only thing that's left is to do the math!
Field of view
The first thing we need to do is to find the field of view of the camera. This will let us translate the pixels in the image into angles in reality.
In order to do that, we will start with a model of how photos are created. We will assume a coordinate system with the z axis along the optical axis of the camera, the x axis representing the left-right direction relative to the camera, and the y axis along the up-down direction, again relative to the camera. "Relative to the camera" means that if we, for example, laid the camera on its side, then the x axis would be de facto vertical, and the y axis would be horizontal.
We will assume that a point (x, y) in the photo corresponds to a direction in space represented by a vector v = [vx, vy, vz] in the coordinates introduced above, where:

vx = x - w/2
vy = h/2 - y
vz = f

with w meaning the width of the photo in pixels, h - the height of the photo (also in pixels), and f - a value related to the horizontal field of view of the camera. Specifically, since our assumptions mean that the point (w/2, h/2) in the image is exactly on the optical axis of the camera, and the point (w, h/2) is at the edge of the frame, we have:

tan(α/2) = (w/2) / f

where α is the camera's horizontal field of view expressed as an angle.

f is the value we want to determine. How do we do that?
Let's take two stars in the photo and denote their azimuths and elevations (a1, h1) for the first star, and (a2, h2) for the second star. An azimuth and elevation of (a, h) correspond to the following vector in space:

u = [cos(h)·cos(a), cos(h)·sin(a), sin(h)]

in coordinates in which the x axis is directed to the north, the y axis is to the east, and the z axis is upwards.

Based on such vectors u1 and u2 corresponding to the two stars, we can calculate their angular distance in the sky, which we will denote γ:

γ = arccos(u1 · u2)

On the other hand, if we assume some value of f, we can calculate the angular distance between the stars based on the photo, using the vectors v1 and v2 corresponding to the stars' pixel positions:

γ' = arccos((v1 · v2) / (|v1| · |v2|))
We can calculate such distances - the real ones, and the ones based on the photo - for multiple pairs of stars, and then check how much the distances based on the photo (with some assumed f) differ from the real ones. Then we can modify f accordingly, check the difference again, and repeat the process until the distances calculated from the photo are as close as possible to the real ones. This process is best done using a computer, which is an ideal tool for performing lots of repetitive calculations - which is what I did, using the function scipy.optimize.least_squares from the SciPy library for Python.
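A sketch of this fitting step, using the star data written down above. The 4000-pixel photo height is my assumption (only the 6000-pixel width is stated in the text), and the variable names are mine:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import least_squares

W, H = 6000, 4000  # photo size in pixels (the height is an assumption)

# x, y in the photo, azimuth and elevation in degrees - the table from the text
stars = np.array([
    [1073, 2655, 306.3924, 49.7092],
    [1372, 1930, 302.7408, 59.5692],
    [4411,  121,  73.4577, 68.8078],
    [ 200,  630, 263.5183, 58.6198],
    [3589, 2949,   0.1895, 49.6069],
])

def sky_vector(az_deg, el_deg):
    """Unit vector for a given azimuth/elevation (x north, y east, z up)."""
    a, h = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(h) * np.cos(a), np.cos(h) * np.sin(a), np.sin(h)])

def photo_vector(x, y, f):
    """Direction corresponding to pixel (x, y), in camera coordinates."""
    return np.array([x - W / 2, H / 2 - y, f])

def angle_between(p, q):
    c = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.arccos(np.clip(c, -1.0, 1.0))

def residuals(params):
    f = params[0]
    out = []
    # Compare real vs photo-based angular distances for every pair of stars
    for i, j in combinations(range(len(stars)), 2):
        real = angle_between(sky_vector(*stars[i, 2:]), sky_vector(*stars[j, 2:]))
        seen = angle_between(photo_vector(*stars[i, :2], f), photo_vector(*stars[j, :2], f))
        out.append(seen - real)
    return out

fit = least_squares(residuals, x0=[3000.0])
f = fit.x[0]
fov_deg = np.degrees(2 * np.arctan(W / (2 * f)))
print(f"f = {f:.0f} px, horizontal FOV = {fov_deg:.1f} deg")
```

For this data the fitted field of view should land close to the value quoted in the text.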
Performing the procedure described above on the data written down from my photo yields a value of f which, with the image width being 6000 pixels, corresponds to a horizontal field of view of approx. 74.5°. I took the photo using a lens with a 35 mm equivalent focal length of 24 mm, which gives a theoretical value for the horizontal field of view of 73.7°.
The orientation of the camera
The next step is to calculate the camera's orientation in space. We will express the orientation using Euler angles in a coordinate system in which the x axis is directed to the north, the y axis is upwards, and the z axis is to the east. You may have noticed that this system resembles the one we used earlier for calculating the angular distances between stars, but with reordered axes. Why did I choose such a system? The reason is simple: it is oriented in such a way that when the camera's axes are parallel to the axes of this system, we have a camera that is perfectly level and looking to the north. It will be convenient to have such an orientation correspond to angles of (0, 0, 0), and any other orientation to correspond to nonzero Euler angles.
But I'm getting a bit ahead of myself. Let's maybe start with a few words about Euler angles. As we can read on Wikipedia, it is a set of 3 angles describing an orientation of a rigid body, or a coordinate system, relative to another coordinate system. In short: these angles tell us how we have to rotate a coordinate system around some 3 specific axes in order to have the axes of this system become parallel to the axes of another system.
In our case, we can imagine these angles this way:
- We start with a camera that is level and looking to the north.
- We rotate the camera by an angle a around the vertical axis. The camera is still level, but is now looking in a different direction - specifically, in a direction with an azimuth equal to a.
- We rotate the camera by an angle h around its left-right axis. This means that the camera's left-right axis will still be horizontal, but the camera is no longer looking horizontally - it is looking towards a point with an elevation of h.
- We rotate the camera by an angle r around its optical axis. Now the camera is still looking at the same point, with an azimuth of a and an elevation of h, but its left-right axis might no longer be horizontal.
By choosing the right values of these 3 angles, we can express any orientation of the camera in space. What's left is to calculate what the angles were in the case of my camera when I was taking my photo.
The method I used is identical to the one I described for calculating the field of view. I assume some values for the Euler angles and check whether the calculated positions of the stars in the photo are the same as their actual positions. If not, I modify the angles until I get as close as possible to the correct result - again with the help of scipy.optimize.least_squares.
Specifically, the maths looks like this - for every star in the photo, I'm performing the following steps:
- Calculate the vector v corresponding to the star's position in the photo, and the vector u corresponding to the direction in reality (based on the star's azimuth and elevation, in the coordinate system described earlier). In addition, normalize the vector v (which just means dividing it by its magnitude, so that the magnitude of the result is 1 - the vector u has a magnitude of 1 from the start).
- Transform the vector v according to the Euler angles, and denote the result by v'.
- Calculate the error for this star, equal to |v' - u|².
The sum of all the errors for all the stars is then taken to be a measure of the error in fitting the angles. The program then aims to minimize this error, which makes it find the right set of Euler angles.
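The mechanics of this fit can be sketched on synthetic data: pick a "true" orientation, generate star directions as the camera would see them, and check that the optimizer recovers the angles. The "zyx" axis order and the specific angle values here are illustrative assumptions, not the exact convention from my script:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
true_angles = [10.0, -64.0, 2.0]  # hypothetical yaw/pitch/roll, in degrees
R_true = Rotation.from_euler("zyx", true_angles, degrees=True)

# Unit vectors playing the role of star directions "in reality"...
u = rng.normal(size=(8, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
# ...and the same directions as seen in the camera frame.
v = R_true.inv().apply(u)

def residuals(angles):
    R = Rotation.from_euler("zyx", angles, degrees=True)
    # Error per star: difference between the transformed photo-frame vector
    # and the real direction (least_squares sums the squares itself).
    return (R.apply(v) - u).ravel()

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0])
print(fit.x)  # should recover true_angles
```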
After applying this to my photo, we get a set of Euler angles indicating that the camera looked almost directly to the north (sounds about right - you can see that in the position of Polaris), was almost level (a low roll angle, which also sounds right) and was pointed at an angle of roughly 64° upwards (which also sounds right - although note that the sign of that angle comes out reversed, but that's just due to some mistake in defining the axes).
The azimuth and elevation of the ISS
The next step is to find the azimuth and elevation of the ISS. This step is fairly simple at this point - having f and the Euler angles, we can just take the ISS coordinates (x, y) from the photo, convert them into a vector, normalize it, transform it using the Euler angles, and calculate the azimuth and elevation from the components of the resulting vector v' = [vn, vu, ve] (north, up and east, in the coordinate system used for the Euler angles) like so:

a = atan2(ve, vn)
h = arcsin(vu)
To be more specific, in code it is best to use another function for the azimuth, commonly called "atan2". The regular inverse tangent (atan, i.e. arctangent) can only return values between -90° and 90°, which doesn't cover the full 360° range of azimuths. "atan2" is a two-argument inverse tangent which takes into account the signs of the numerator and denominator, and thanks to this it can return values between -180° and 180°.
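A small sketch of this conversion, from the north, east and up components of a direction vector (the function name and component order are my choices):

```python
import math

def az_el_from_components(n, e, u):
    """Azimuth and elevation in degrees from the north, east and up
    components of a direction vector (not necessarily normalized)."""
    az = math.degrees(math.atan2(e, n)) % 360.0  # atan2 covers the full circle
    el = math.degrees(math.asin(u / math.sqrt(n*n + e*e + u*u)))
    return az, el

print(az_el_from_components(0.0, 1.0, 0.0))   # due east, horizontal
print(az_el_from_components(0.0, -1.0, 0.0))  # due west, horizontal
```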
Performing these calculations on the data for the starting point of the ISS trail from my photo yields the result:
Azimuth and elevation of the ISS: 329.3345431204687 62.77086064445405
According to Stellarium, the azimuth and elevation of the ISS at that moment should have been 329.3554 and 62.7508. I'd say that's a fairly satisfying accuracy :)
The last step - intersecting the lines
Having the azimuths and elevations of the starting and ending points of the ISS trail in the photo, we now just need to find the point in space where the lines of sight of different observers intersect.
However, we need to take into account that the azimuths and elevations are directly related to local directions like vertical or north. Specifically, if we express the direction as a vector, then an azimuth and elevation of (a, h) correspond to the following one:

v = cos(h)·cos(a)·n + cos(h)·sin(a)·e + sin(h)·u

The vectors n, e and u are unit vectors corresponding to the local directions to the north, to the east and upwards. Their exact representations in a coordinate system can depend on the system we use (we actually had an opportunity to see that before, when we used a slightly different system for finding the Euler angles than the one for angular distances between the stars).
In order to find the point of intersection of the lines of sight determined by each photo, we need to express all the positions and directions in a single global coordinate system.
This is where one more complication arises - for the purposes of arguing with flat Earthers, I will want to perform the calculations for both flat and globe Earth. This means that I will have to introduce two separate coordinate systems.
Let's start with the case of a flat Earth, because it's the simpler one. We will assume that the Earth is a disk in the xy plane, with the x axis passing through Greenwich, and the North Pole at the point (0, 0, 0). We will assume the z axis to be perpendicular to the disk. Moreover, we will assume that latitude corresponds to the distance from the pole to the given point, with the pole-to-equator distance being 10000 km. Specifically, latitude φ and longitude λ will correspond to the following coordinates in space:

d = 10000 km · (90° - φ) / 90°
x = d·cos(λ)
y = d·sin(λ)
z = 0
(The coordinates are expressed in kilometers, and we assume that the observer is at sea level - which introduces some inaccuracy, but a small one, in reality all of us were less than 300 meters above sea level.)
The direction vectors will then look the following way:

n = [-cos(λ), -sin(λ), 0]
e = [-sin(λ), cos(λ), 0]
u = [0, 0, 1]
For a globe Earth, we will introduce coordinates in which the point (0, 0, 0) is the Earth's center, the z axis passes through the poles, and the x axis passes through the intersection of the equator with the Greenwich meridian. Assuming the Earth's radius R = 6371 km, we can express the position of the observer the following way:

x = R·cos(φ)·cos(λ)
y = R·cos(φ)·sin(λ)
z = R·sin(φ)
The directions will look like this:

n = [-sin(φ)·cos(λ), -sin(φ)·sin(λ), cos(φ)]
e = [-sin(λ), cos(λ), 0]
u = [cos(φ)·cos(λ), cos(φ)·sin(λ), sin(φ)]
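Both coordinate systems fit in a few lines of code. This is a sketch under the assumptions stated above (a 6371 km radius for the globe, a 10000 km pole-to-equator distance for the disk; function names and sign conventions are mine):

```python
import numpy as np

R_EARTH = 6371.0           # km, assumed mean radius of the globe
POLE_TO_EQUATOR = 10000.0  # km, assumed scale of the flat-Earth disk

def globe_position(lat_deg, lon_deg):
    """Sea-level observer on the globe (z through the poles, x through Greenwich/equator)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                               np.cos(lat) * np.sin(lon),
                               np.sin(lat)])

def globe_neu(lat_deg, lon_deg):
    """Local north, east and up unit vectors on the globe."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    up = np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
    return north, east, up

def flat_position(lat_deg, lon_deg):
    """Observer on the flat-Earth disk (pole at the origin)."""
    d = POLE_TO_EQUATOR * (90.0 - lat_deg) / 90.0
    lon = np.radians(lon_deg)
    return np.array([d * np.cos(lon), d * np.sin(lon), 0.0])

def flat_neu(lat_deg, lon_deg):
    """Local north, east and up unit vectors on the disk."""
    lon = np.radians(lon_deg)
    north = np.array([-np.cos(lon), -np.sin(lon), 0.0])  # towards the pole
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    up = np.array([0.0, 0.0, 1.0])
    return north, east, up
```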
Fitting the position
We finally reach the stage where we fit the position of the ISS. "Fit", because the method will be similar to the one used for the field of view and the Euler angles - we will pick some candidate coordinates (x, y, z), calculate some measure of error, and then we'll tell the computer to find the coordinates that minimize that error.
For the measure of error, we will choose the quadratic average of the distances of the position from the lines of sight determined from the photos. Let me explain.
Having a line defined by some point P and a direction d (we also assume that the magnitude of d is 1), we can calculate the distance of a point S from this line the following way:
- Calculate the vector from P to S: w = S - P.
- Find the projection of w onto d, which is: (w · d)·d.
- The difference between w and its projection onto d is the component of w that is perpendicular to d. This component is a vector perpendicular to our line, going from some point on the line to the point S - the magnitude of this vector is the distance we're looking for.
So, the distance of the point S from the line is equal to:

|w - (w · d)·d|
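The steps above can be sketched as a short function:

```python
import numpy as np

def distance_from_line(S, P, d):
    """Distance of the point S from the line through P with unit direction d."""
    w = S - P
    return np.linalg.norm(w - np.dot(w, d) * d)

# Example: the distance of the point (0, 3, 4) from the x axis is 5.
print(distance_from_line(np.array([0.0, 3.0, 4.0]),
                         np.zeros(3),
                         np.array([1.0, 0.0, 0.0])))  # 5.0
```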
In our case, the point S is the assumed position of the ISS, the point P is the observer's location, and the vector d is determined by the azimuth and elevation of the ISS and the position of the observer in the global coordinates.
For the i-th observer, we calculate di: the distance of the assumed ISS position from the line determined by that observer's photo, and then we calculate the average distance of the ISS position from the n lines:

D = sqrt((d1² + d2² + ... + dn²) / n)
We consider the ISS to be at the position that minimizes this average. If all the lines intersected at a single point, this average would be 0 - but since our measurements aren't perfectly accurate, the chances of that happening are rather low.
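The whole fit can be sketched on made-up lines of sight: three observers aim at roughly (but not exactly) the same point, and the optimizer finds the position minimizing the distances. Note that least_squares minimizes the sum of squared residuals, which is equivalent to minimizing the quadratic mean; all numbers here are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)
target = np.array([10.0, 20.0, 400.0])   # made-up "ISS" position, km
observers = [np.array([0.0, 0.0, 0.0]),
             np.array([150.0, 0.0, 0.0]),
             np.array([0.0, 180.0, 0.0])]

# Each line of sight points from an observer to a slightly perturbed target,
# so the lines don't intersect exactly - just like real measurements.
lines = []
for O in observers:
    aim = target + rng.normal(scale=0.5, size=3)
    d = aim - O
    lines.append((O, d / np.linalg.norm(d)))

def distances(S):
    """Distance of the candidate position S from every line of sight."""
    return [np.linalg.norm((S - P) - np.dot(S - P, d) * d) for P, d in lines]

fit = least_squares(distances, x0=np.array([0.0, 0.0, 300.0]))
print(fit.x)  # close to target
```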
By the way, note that I didn't use the regular, arithmetic mean here, but something called the "quadratic mean". Why?
Assume we have two lines, and that the distance between them at the closest point is, say, 6 km. If we take a point on that 6 km segment between the two lines, it will be at a distance of x from one of the lines, and 6 - x from the other one. The "normal" mean of these two distances is then (x + (6 - x)) / 2 = 3. It doesn't matter which point between the lines we choose, the average distance will be 3.
It is different with the quadratic mean. This mean will be sqrt((x² + (6 - x)²) / 2). This number depends on x and is smallest when x is 3. In other words, if we aim to minimize this mean, we will find the point precisely halfway between the two lines. If we used the arithmetic mean, we could get any point on the 6 km segment between the two lines.
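The arithmetic here is easy to check numerically: the arithmetic mean of the two distances stays at 3 for any x, while the quadratic mean has its minimum at x = 3:

```python
import math

def arithmetic_mean(x):
    # Mean of the distances x and 6 - x: always 3, regardless of x
    return (x + (6 - x)) / 2

def quadratic_mean(x):
    # Quadratic mean of the same two distances: smallest at x = 3
    return math.sqrt((x**2 + (6 - x)**2) / 2)

for x in [0.0, 1.0, 3.0, 5.0]:
    print(x, arithmetic_mean(x), round(quadratic_mean(x), 3))
```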
The speed of the ISS
The last piece of information we can extract from the photos is the speed of the ISS - specifically, the approximate average speed during the 30 seconds of exposure. How to find it? Just calculate the position of the ISS at the beginning of the exposure, then the position at the end, calculate the distance between the two positions and divide by 30 seconds. Done.
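To sketch that last calculation: here are two ISS positions from the first result block below (which I read as the globe model), converted to geocentric coordinates with an assumed 6371 km Earth radius and divided by the 30-second exposure:

```python
import numpy as np

R_EARTH = 6371.0  # km, assumed mean radius

def position_xyz(h_km, lat_deg, lon_deg):
    """Geocentric position for the globe model (z through the poles)."""
    r = R_EARTH + h_km
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return r * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

# Start and end of the exposure (values taken from the results in the text)
p1 = position_xyz(410.98, 51.7392, 17.5184)
p2 = position_xyz(422.20, 51.7000, 20.7150)

speed_ms = np.linalg.norm(p2 - p1) / 30.0 * 1000.0
print(f"{speed_ms:.1f} m/s")  # close to the ~7830 m/s in the results
```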
Let's finally take a look at the results!
For the 3 photos presented above, the results look like this. Assuming a globe Earth:

Average distance from lines of sight: 8.02 km
2020-05-16 22:45:25: h = 410.98 km, lat = 51.7392, lon = 17.5184
Average distance from lines of sight: 3.38 km
2020-05-16 22:45:55: h = 422.20 km, lat = 51.7000, lon = 20.7150
Speed: 7829.6 m/s

And assuming a flat Earth:

Average distance from lines of sight: 5.72 km
2020-05-16 22:45:25: h = 390.90 km, lat = 51.7581, lon = 17.6299
Average distance from lines of sight: 3.55 km
2020-05-16 22:45:55: h = 407.19 km, lat = 51.7310, lon = 20.6846
Speed: 7573.2 m/s
The average distances listed here are the measures of error I described above.
As you can see, regardless of whether we assume the Earth to be spherical or flat, the result is that the ISS - whatever it is - moves at an altitude of approximately 400 km with a speed exceeding 7 km/s. There are no balloons or drones that can do that. The only reasonable explanation is simple: the ISS is in fact a space station, orbiting the spherical Earth.
I need to point out one more thing here: somebody will definitely notice that the average distance from lines of sight is smaller for flat Earth than for the globe, at least for the starting position. One could be tempted to consider this evidence for the flatness of the Earth, but I must disappoint those who were about to claim this: the difference is much too small to mean anything. The uncertainty of the directions calculated from the photos alone most likely causes differences larger than these few kilometers. If the difference were large - say, a few km in one case and a few tens of km in the other - then it would be something to discuss. But with these results, any conclusions of that kind would be premature.
By the way, it might be possible to use observations of the ISS to distinguish between a spherical Earth and a flat Earth with data collected from a larger area. We were observing the ISS from a triangle with sides of 170-200 km, while at any moment it is visible from a circle with a radius of roughly 1400 km. If more people were involved, taking observations of the ISS from places a few hundred km from each other - such that some would see it near the zenith, and some near the horizon - it could turn out that one of the shapes of the Earth fits the data much better than the other one.
Measuring the position of the ISS turned out to be great fun, and the accuracy of the results exceeded my expectations :)
We can repeat these observations. If you're interested in doing that, come to my Discord server, where we can coordinate the undertaking (but beware: there are mostly people arguing with flat Earthers on YouTube there, so if you come to the server, don't be surprised ;) ). The more people are photographing the ISS simultaneously, the better the result, so feel invited to help :)
I suspect that flat Earthers will remain unconvinced, despite fairly conclusive results about the nature of the ISS - but that's not what this was about. It was about good fun and seeing what kind of results can be obtained, and these goals were fully accomplished :)