I'm going to use my rig as an example, because, well, I did this math already. But that's OK, because you can use it to get some idea of things. I'll give you the math if you're really inspired to scribble on a big chalkboard and draw diagrams to impress your ladyfriend (or gentleman friend... and, like other species, geeks and nerds come in a variety of standards and even non-discrete units; gender/sexuality can't be determined by a Millikan oil drop).
The non-variables:
- I have a 6" f/4 telescope with optics of unknown quality
- Through these optics, each pixel represents 1.46 arc-seconds of sky/star
- My tracking scope (ST80) and its little camera reproduce 1.93 arc-seconds per pixel
- I have some messy data from my tracking scope in operation (used below)
- The angular diameter of Sirius is 0.006 arc-seconds
I obtained the arc-seconds/pixel numbers for the two scopes by plate solving images taken from them using the wondrous tools at nova.astrometry.net.
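If you're curious how those plate-solved numbers square with the hardware, the standard plate-scale formula gets you there. Here's a quick sketch in Python; the focal lengths are the nominal ones for these scopes, and the pixel sizes are my assumptions picked to land on the plate-solved values, not manufacturer specs:

```python
# Plate scale: how much sky one pixel covers.
# arc-seconds per pixel = 206.265 * pixel_size(um) / focal_length(mm)
# (206,265 arc-seconds per radian; 206.265 absorbs the um -> mm conversion.)

def plate_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

print(plate_scale(4.25, 600))  # ~1.46"/px -- 6" f/4 imaging scope (pixel size assumed)
print(plate_scale(3.75, 400))  # ~1.93"/px -- ST80 + guide camera (pixel size assumed)

# Sirius at 0.006 arc-seconds is well under 1% of one imaging pixel.
print(0.006 / 1.46)  # ~0.004 of a pixel
```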
So very simply, if the angular diameter of Sirius is less than one pixel of the sensor, and if everything were amazingly perfect, it should look like this when magnified:
[Image: Not real life]
If you claim to have seen such a thing you're a liar and a scoundrel.
Any telescope's resolution is limited by the physics of light, simply because how cleanly an optic can focus a point of light is bounded by the wavelength of the light itself. Even if the optic is figured to better than the light's wavelength, a pinpoint of light will reproduce as a central disk with rings of interference around it, in a ratio of roughly 84% of the light in the middle and 16% in the rings. How long you expose the image determines how bright the rings are, up to the point that they no longer appear as rings on the sensor and instead merge into a larger disk. This is why images of bright stars fill more pixels than dim ones. The size of this disk is determined by the diffraction limit, which is:
1.22 × wavelength(cm) / diameter(cm)
in radians. So for light somewhere in the middle of the visible spectrum (0.00005cm) and a diameter of 15.2cm, we get 0.83 arc-seconds. Astro-Tech lists the scope's resolution as 0.76 arc-seconds. Isn't that interesting? At any rate, that's the size of the central disk. So in theory a short enough exposure would still render a single pixel. The diffraction rings, which look a bit like this:
[Image: Not to scale with the other fake pixels]
would render as pixels something like this with sufficient exposure:
[Image: Real life if you're in space]
That's looking more like a star in a telescope like we're used to. This takes care of your Dawes numbers, Rayleigh criterion, or whatever else you subscribe to. Don't get too picky about the differences between those; we're taking pictures from the bottom of a deep pool.
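If you want to check the arithmetic yourself, here's a quick Python sketch of both limits using the numbers above; the Dawes formula (4.56 divided by aperture in inches) is the standard empirical one, and presumably where Astro-Tech's 0.76 comes from:

```python
ARCSEC_PER_RADIAN = 206265

def rayleigh_limit_arcsec(wavelength_cm, diameter_cm):
    # Angular size of the central (Airy) disk: 1.22 * wavelength / diameter, in radians
    return 1.22 * wavelength_cm / diameter_cm * ARCSEC_PER_RADIAN

def dawes_limit_arcsec(aperture_inches):
    # Dawes' empirical double-star limit
    return 4.56 / aperture_inches

print(rayleigh_limit_arcsec(0.00005, 15.2))  # ~0.83 arc-seconds
print(dawes_limit_arcsec(6))                 # 0.76 -- the figure Astro-Tech lists
```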
Being in Florida, I'm looking through about 30 feet of water in a best-case scenario. It's a swampy, swampy state with a lot of dense, wet air starting at sea level. This produces "seeing" quality issues. The dense, wet air refracts light the same way a glass (refractor) telescope does, except that it is constantly shifting with air currents, hundreds of times per second. If I were on Mauna Kea again, that would distort the location of a given star (and its diffraction rings) by about 0.4 arc-seconds on a good night. Here? It's probably 2 arc-seconds on a good night, and likely 3 most of the time. Let's go with 2.5, or 1.7 pixels. Yes, I'm skipping over the concept of FWHM here because it's a calculus problem and you don't really need it for this sort of back-of-the-envelope look at things. Maybe another time.
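Converting those seeing estimates into pixels is just division by the plate scale; a quick sketch, assuming the seeing numbers above (real seeing varies minute to minute):

```python
imaging_scale = 1.46  # arc-seconds per pixel on the AT6IN/Fuji, from plate solving

for site, seeing in [("Mauna Kea, good night", 0.4),
                     ("Florida, good night", 2.0),
                     ("Florida, typical", 3.0),
                     ("Florida, let's call it", 2.5)]:
    print(f"{site}: {seeing / imaging_scale:.1f} pixels of smear")
# 2.5 arc-seconds of seeing smears the star across ~1.7 pixels
```

New image: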
[Image: Scuba/Swamp Vision]
That's basically my expected detail if I get everything else completely right with a significant exposure. Shorter exposures, of course, could render a smaller image. In fact, if the exposure were shorter than the timescale of the eddy currents causing the seeing conditions (and I got lucky with a current that produced very little distortion), and short enough not to expose the diffraction rings, it could take up a single pixel. Instead, this is what I've got:
[Image: This should really only take up 4 or so pixels]
The shift of colors from red to blue tells me there's chromatic aberration, and because I can see it on other, more significant things in that image, comatic aberration (coma) as well. How do I know the stretch isn't tracking error?
PHD2 unfortunately doesn't like my camera and won't let me put in a pixel size value for it, so it only reports deviation in pixels. Since I was able to plate solve an actual image from the camera, though, it's easy math (arc-seconds = deviation in pixels × arc-seconds per pixel):
- Average deviation of 0.45 pixels = 0.87 arc-seconds
- Peak deviation of 1.5 pixels = 2.9 arc-seconds
Those are for the ST-80, so with arc-seconds as the common currency, that means going the other way for the AT6IN/Fuji combo (sketched in code after this list):
- Average deviation of 0.87 arc-seconds ≈ 0.6 pixels
- Peak deviation of 2.9 arc-seconds ≈ 2 pixels
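Here's that conversion as code, since it's easy to get the direction of the multiplication backwards; both plate scales come from the plate solving mentioned above:

```python
guide_scale = 1.93    # arc-seconds per pixel, ST80 + guide camera
imaging_scale = 1.46  # arc-seconds per pixel, AT6IN/Fuji

for label, guide_px in [("average", 0.45), ("peak", 1.5)]:
    arcsec = guide_px * guide_scale       # guide pixels -> arc-seconds
    imaging_px = arcsec / imaging_scale   # arc-seconds -> imaging pixels
    print(f"{label}: {guide_px} guide px = {arcsec:.2f}\" = ~{imaging_px:.1f} imaging px")
```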
The image is stretched over at least 3 pixels, so the only other candidate beyond my optical issues is focus. I'm focusing using a Bahtinov mask, which produces one of those scientifically accurate levels of focus that I couldn't achieve by looking at pixels alone.
So...collimation, optics, aberration.
For reference, here's a bright star with a longer exposure. It's a little harder to tell what's going on there, but you get the idea:
[Image: Astro-probs.]