diff --git a/_posts/2024-04-01-april-fools.md b/_posts/2024-04-01-april-fools.md
deleted file mode 100644
index 75b54a1..0000000
--- a/_posts/2024-04-01-april-fools.md
+++ /dev/null
@@ -1,3 +0,0 @@
-test post
-
-hi
diff --git a/_posts/2024-04-01-experiment.md b/_posts/2024-04-01-experiment.md
new file mode 100644
index 0000000..cbd0914
--- /dev/null
+++ b/_posts/2024-04-01-experiment.md
@@ -0,0 +1,6 @@
+time to try something a bit different. up till now, this project has mostly been about code - at first cpp, then mixed cpp and py, and recently all three side by side in parallel - py, cpp, and mixed. that could evolve and grow now to encompass something wider. part of what's bringing this about is a better understanding that there are two tools in hand now -
+
+- sphinx - this is about python. with python, it's standard to do docstrings within the py code files. sphinx is simply a way to convert those into a static html website.
+- jekyll - this is effectively about github. with github, it's standard to do markdown files within the project. jekyll is simply a way to convert those into a static html website.
+
+project starid has been doing sphinx 'docsascode/readthedocs' for years. what's new is - experimenting with a jekyll 'project website' as well. so two separate websites flowing out of the project - 'docsascode/readthedocs' via sphinx, 'project website' via jekyll.
\ No newline at end of file
diff --git a/about.md b/about.md
index 265e94d..1201941 100644
--- a/about.md
+++ b/about.md
@@ -2,8 +2,7 @@
layout: page
title: "about"
---
-
-this's been a hobby project for well over three decades now. here's some notes around how that all came about.
+this has been a hobby project for well over three decades now. here's some notes around how that all came about.
the 'lost in space' problem - [photos](https://photos.app.goo.gl/ifuTJUNsaRJK21E79){:target="_blank" rel="noopener"}
--------------------------
@@ -30,7 +29,7 @@ reflected sunlight from asteroids is dim because of their small size and distanc
1990 - [photos](https://photos.app.goo.gl/vKBxieTbwsbmshCg8){:target="_blank" rel="noopener"}
-------------------------------------------------------------
-summer of ninety - university of texas at austin astronomy department - recently the hubble space telescope had finally reached orbit and the berlin wall had fallen - rent was less than two hundred a month, just a short walk north of campus - martha ann zively, the eighty-three year old landlady, lived directly overhead, and mobile phones, notebook computers, and the web were all somewhere over the horizon - home internet was a dialup modem into a university access point.
+summer of ninety - university of texas at austin astronomy department - recently the hubble space telescope had finally reached orbit and the berlin wall had fallen - rent was less than two hundred a month, just a short walk north of campus - the eighty-three year old landlady and friend of lbj and john connally, mrs zively, lived directly overhead. mobile phones, notebook computers, and the web were all somewhere over the horizon - home internet was a dialup modem into a university access point.
since the previous fall, work meant the hubble space telescope astrometry team - a group with members from the astronomy department, mcdonald observatory, the aerospace engineering department, the center for space research, and the european space agency and its hipparcos project - paul hemenway, an astronomer involved with all those organizations, was mentor and friend. life on the top three floors of rlm, the physics math astronomy tower, was special. high above the green treetops of austin, looking west to the hill country, it was a quiet and mellow time. a hippy vibe permeated the scene. wheatsville food coop, a few blocks away, was still in the seventies, which weren't so long ago after all.
@@ -46,46 +45,113 @@ an observing night began a few hours before sunset. down in the telescope contro
a stack of white envelopes, each containing a glass photographic plate, waited on a desk - to prepare, a control room terminal with a command line program on the nova generated telescope pointing information for a list of asteroids. using this mini was probably the last serious contact with the large eight-inch floppy disks - they were a vanishing breed by the late 80s. after jotting down notes for the planned observations, the plates were moved up into the dome, where it was pitch black except for clouds of stars in the open slit - the telescope loomed overhead in the darkness.
-caution was required climbing up the stairs onto the circular telescope floor - it rose and descended in order to stay near the camera as the telescope moved - one could easily step off in the dark, high above the dome floor. the massive base of the telescope and attached camera hung at eye level - sliding out the plate holder cover opened a rectangular frame of stars, with the silhouette of the telescope secondary mirror housing and its support struts high above. mcdonald maintenance staff had mounted the camera and connected power cables, but fine tuning was always needed, and the telescope itself had to be focused - this meant adjusting the position of the secondary mirror. a rocker switch on the telescope hand controller activated a motor to move the secondary inward or outward - the exact determination of focus was old-school, using a knife-edge. in the telescope’s focal plane, all of the light from a star converges through a single point. when a knife-edge cuts through that point, the light from the star is cut off instantly - if the knife-edge dims the star gradually, then the secondary mirror position needs to be adjusted. the point of instant cut off needed to be where the photographic plates were held by the camera, so a special metal frame mounting a straight knife-edge was fastened into the plate holder, for adjusting the secondary mirror position while watching the knife-edge. if there was a bit of spare time, the metal frame could be replaced with another special frame holding the eyepiece, a glass lens heavy enough to require both hands to lift - peering inside, one saw directly a mysterious world of red or green nebulas or spiraling galaxies...
+caution was needed climbing up the stairs onto the circular telescope floor - it rose and descended in order to stay near the camera as the telescope moved - it was easy to take a step onto thin air in the dark, high above the dome floor. the massive base of the telescope and attached camera hung at eye level - sliding out the plate holder cover opened a rectangular frame of stars, with the silhouette of the telescope secondary mirror housing and its support struts high above.
+
+mcdonald maintenance staff had mounted the camera and connected power cables, but fine tuning was always needed, and the telescope itself had to be focused - this meant adjusting the position of the secondary mirror. a rocker switch on the telescope hand controller activated a motor to move the secondary inward or outward - the exact determination of focus was old-school, using a knife-edge.
+
+in the telescope’s focal plane, all of the light from a star converges through a single point. when a knife-edge cuts through that point, the light from the star is cut off instantly - if the knife-edge dims the star gradually, then the secondary mirror position needs to be adjusted. the point of instant cut off needed to be where the photographic plates were held by the camera, so a special metal frame mounting a straight knife-edge was fastened into the plate holder, for adjusting the secondary mirror position while watching the knife-edge.
+
+if there was a bit of spare time, the knife-edge frame could be replaced with a frame holding 'the eyepiece'. this was a glass lens about the size of a small flower vase and heavy enough to require both hands to lift. peering inside, the eye saw vivid red or green colors in nebulas - much like the popular photos.
+
+once the telescope was ready, camera configuration came next. with the small field of view of the telescope - effectively a high magnification - asteroids moved significantly, relative to the background stars, over an interval of around ten minutes - each asteroid was a bit different, and various orbital characteristics had to be taken into account - the direction and rate of relative motion had already been computed by the minicomputer - now the camera body was rotated in its mounting, relative to the telescope, and programmed to move at the apparent rate of the target asteroid, so it would appear to be motionless.
+
+the rear-surface of the image dissector was a round crt screen divided by lines into four quadrants. light from a star, cascading down through the photomultiplier tube, formed a green glow on the screen. a guide-star was found near the asteroid and centered on the screen - with the tracking loop activated, the camera position was updated every few seconds with a mechanical click, keeping the star at the center of the screen.
+
+an asteroid exposure usually began with guide-star tracking - then the steady clicking of the tracking loop would go silent for a period of asteroid tracking, with asteroid-light building up a darker spot on the photographic plate - then the guide-star tracking would resume. the result was a dumbbell shape for stars, with two circular peaks connected by a trail - the asteroid was a trail with a circular peak at its midpoint. these peaks and trails became visible the next day when the plates were developed. each plate had many dumbbell shaped stellar trails - short or long, thick or thin - and at the center of the plate, a single ufo shaped asteroid-image.
+
+reducing the glass plates to digital data, and then improving the asteroid orbital parameters, followed over the next weeks and months - this all took place back in austin, where the center for space research and department of aerospace engineering became involved - their expertise in orbit determination played an important role in the hubble astrometry team. the space age was roughly thirty years old at the time, and members of its first generation led the center for space research - ray duncombe, byron tapley, and bob schutz.
+
+first, the plates had to be measured using a scanner and minicomputer in the scanning room, hidden behind a nondescript door in the astronomy department library on the fifteenth floor of robert lee moore hall - better known simply as rlm. many hours passed in the scanning room - it was a meditative kind of place, cool and dark, with a steady loud drone from electronics fans. the long back wall was covered with cabinets containing thousands of glass plates, including historic sets of survey plates from palomar and the european southern observatory, alongside many plates from mcdonald.
-once the telescope was ready, camera configuration came next. with the small field of view of the telescope - effectively a high magnification - asteroids moved significantly, relative to the background stars, over an interval of around ten minutes - each asteroid was a bit different, and various orbital characteristics had to be taken into account - the direction and rate of relative motion had already been computed by the nova - now the camera body was rotated in its mounting, relative to the telescope, and programmed to move at the apparent rate of the target asteroid, so it would appear to be motionless. the rear-surface of the image dissector was a round crt screen divided into four quadrants. light from a star, cascading down through the photomultiplier tube, formed a green glow on the screen. a guide-star was found near the asteroid and centered on the screen - with the tracking loop activated, the camera position was updated once per second with a mechanical click, keeping the star at the center of the screen. an asteroid exposure usually began with guide-star tracking - then the steady clicking of the control loop would go silent for a period, and the sky would turn while asteroid-light built up a darker spot on the photograph plate - then the clicking would resume. the result was a dumbbell shape for stars, with two circular peaks connected by a trail - the asteroid was a trail with a circular peak at its midpoint. these peaks and trails became visible the next day when the plates were developed. each plate had many dumbbell shaped stellar trails - short or long, thick or thin - and at the center of the plate, a single ufo shaped asteroid-image.
+black plastic sheets from ceiling to floor made a kind of cave of the back half of the room, and inside sat the pds microdensitometer. this was a machine for mechanically scanning photographic glass plates - an interesting time capsule of analog-era technology. light from a bulb was focused into a beam shooting downward through a mechanically driven stage with position encoders. a photometer below the stage measured the transmitted intensity while the stage moved in a raster pattern. sampling of the photometer and encoders was done by a very early, mini-fridge sized, rack-mounted sun workstation.
-reducing the glass plates to digital data and improving knowledge of the asteroid orbits followed over the next days and weeks - this all took place back in austin, where the center for space research and department of aerospace engineering became involved - their expertise in orbit determination played an important role in the hubble astrometry team - the space age was roughly thirty years old, and members of its first generation led the center for space research - ray duncombe, byron tapley, and bob schutz. first, the plates had to be measured using a scanner and minicomputer in the scanning room, hidden behind the astronomy department library on the thirteenth floor of robert lee moore hall - better known simply as rlm. many hours passed in the scanning room - it was a meditative kind of place, cool and dark, with a steady loud drone from electronics fans. the long back wall was covered with cabinets containing thousands of glass plates, including historic sets of survey plates from palomar and the european southern observatory, alongside many plates from mcdonald - black plastic sheets shielded the end of the room from stray light, and at the center of this cave sat the pds microdensitometer. this was a machine for mechanically scanning photographs - an interesting time capsule of analog-era technology. light from a bulb was focused into a beam downward through a mechanically driven stage with position encoders. a photometer below the stage measured the transmitted intensity while the stage moved in a raster pattern. sampling of the photometer and encoders was done by a very early, mini-fridge sized, rack-mounted sun workstation.
+may or june of ninety was the first observing run at mcdonald - chat among the astronomers was about serious problems with hubble that were repeatedly making headline news - there was still lots of discussion of the high gain antennas, because news of the catastrophic error in the primary-mirror hadn’t yet leaked out - overhearing the veterans during those days at mcdonald was an early revelation about the realities of science and technology. the real world seemed a bit different than the popular science coverage.
-may or june of ninety was the first observing run at mcdonald - chat among the astronomers was about serious problems with hubble that were repeatedly making headline news - there was still lots of discussion of the high gain antennas, because news of the catastrophic error in the primary-mirror hadn’t yet leaked out - overhearing the veterans during those days at mcdonald was an early revelation about the realities of science and technology. it was an eight-hour drive to west texas from austin - three or four nights were occupied with making plates with the eighty-two inch - then came the drive back to austin. texas summer heat was just beginning to get intense, and after a sweltering walk over to rlm it was nice to settle into the cool darkness of the scanning room - that little apartment could be uncomfortably warm during the day, even with the air conditioning running full blast.
+it was an eight-hour drive to west texas from austin - three or four nights were occupied with making plates with the telescope - then came the drive back to austin. texas summer heat was just beginning to get intense, and after a sweltering walk over to rlm it was nice to settle into the cool darkness of the scanning room - that little apartment at mrs zively's house could be uncomfortably warm during the day, even with the air conditioning running full blast.
-the plates were roughly the size and shape of writing paper - the glass was fairly thin and fragile - held up against a background light, the star and asteroid trails were small dark smudges. with the plate secured to the pds scanning stage, and looking across the plate’s surface, dull black trails of photographic emulsion were obvious on the surface of the glass, and the control software on the workstation had to be told which trails to scan. this meant moving the scanning beam about the plate, manually steering the stage and noting coordinates - at the top of the pds, roughly at eye level, was a circular glass screen showing a magnified image of the plate illuminated by the scanning beam - individual grains of photographic emulsion were visible, and when the beam was near a star trail it appeared as a fuzzy black worm. the stage was adjusted using two finely geared knobs, and the coordinates of the scanning beam were shown by two sets of red leds on the pds console - the corners of a rectangle about a star trail were the coordinates for a raster scan, and were entered in manually at the workstation keyboard.
+the plates were roughly the size and shape of writing paper - the glass was fairly thin and fragile - held up against a background light, the star and asteroid trails were small dark smudges. with the plate secured to the pds scanning stage, and looking across the plate’s surface, dull black trails of photographic emulsion were obvious on the surface of the glass, and the control software on the workstation had to be told which trails to scan.
-the workstation was a tall rack standing in the back corner and mounting a mini-fridge sized early sun box - on a table beside the rack was an extremely heavy old crt monitor showing one of the first primitive unix graphical user interfaces, the sunview precursor to x windows - this machine already had the antiquated feel of an earlier era. a scanning session meant creating a set of digitized raster files, one file for each trail scanned by the pds, archived on 9-track half-inch tape - a group of files, say thirty to fifty for a plate with a good exposure and lots of stars, was created in the filesystem of the workstation and then written to tape using its sibling above on the sixteenth floor, which had the tape drive - the shift over the border from analog to digital took place in the seventies style electronics connecting the pds to the workstation. a few days after scanning those first plates - paul and ray duncombe discussed the next steps in wrw, the aerospace building. there's a clear memory of the short walk from rlm to wrw - stopping in the texas sun - overhead was the typical hard blue summer sky with little white clouds, and sweat running down just seconds after stepping outside the air conditioning - the thunderbolt question has struck from a clear sky - exactly which stars were on those plates? how could those stars really, in practice, be determined, in order to determine the position of the asteroid? was there a program on the astronomy or aerospace computers to do that? the answer was, no - there wasn’t an easy or obvious solution, and helping to figure out a practical method of identifying those stars on those particular plates was the real job - not that an undergrad had any chance of even beginning to find a real solution, but even beginning to be aware of and recognize the magnitude of the problem was a huge step forward - how did one go about recognizing stars - humans could do it, but could an eighties computer system?
+this meant moving the scanning beam about the plate, manually steering the stage and noting coordinates - at the top of the pds, roughly at eye level, was a circular glass screen showing a magnified image of the plate illuminated by the scanning beam - individual grains of photographic emulsion were visible, and when the beam was near a star trail it appeared as a fuzzy black worm. the stage was adjusted using two finely geared knobs, and the coordinates of the scanning beam were shown by two sets of red leds on the pds console - the corners of a rectangle about a star trail were the coordinates for a raster scan, and were entered manually at the workstation keyboard.
+
+the workstation was a tall rack standing in the back-right corner and mounting a mini-fridge sized early sun microsystems box - on a table beside the rack was an extremely heavy old crt monitor showing one of the first primitive unix graphical user interfaces, the sunview competitor of x windows - this machine already had the obsolete feel of an earlier era. the hardware seemed especially ponderous and heavy, as if made from scrap steel in an old-school factory - which is probably pretty close to the truth.
+
+a scanning session meant creating a set of digitized raster files, one file for each trail scanned by the pds, archived on 9-track half-inch tape - a group of files, say thirty to fifty for a plate with a good exposure and lots of stars, was created in the filesystem of the workstation and then written to tape using its sibling above on the sixteenth floor, which had the tape drive.
+
+the shift over the border from analog to digital took place in the seventies style electronics connecting the pds to the workstation. a few days after scanning those first plates, paul and ray duncombe discussed the next steps in wrw, the aerospace building. it was a short walk from rlm to wrw. overhead was the usual hard blue texas summer sky with little white clouds, and a blazing sun. sweat was running down just seconds after stepping outside the air conditioning.
+
+during that short walk, lightning struck - exactly which stars were on those plates? how could those stars really, in practice, be identified, in order to determine the position of the asteroid? was there a program on the astronomy or aerospace computers to do that?
+
+the answer was, no. there wasn’t an easy or obvious solution, and helping to figure out a practical method of identifying those particular stars on those particular plates was ultimately the real job. not that an undergrad had any chance of finding a real solution. but just becoming aware of and recognizing the magnitude of the problem was a huge step forward. how are stars recognized? humans could do it, but could an eighties computer system?
2003 - [photos](https://photos.app.goo.gl/ng8Nbxra2RYrbeWA7){:target="_blank" rel="noopener"}
-------------------------------------------------------------
-thirteen years later, the boss for the next eleven years was bob schutz - working in aerospace and the icesat group at the center for space research, mostly on star trackers - modern descendents of maritime sextants for celestial navigation - along with inertial sensors, often referred to simply as gyros. the problems once again, at root, concerned images containing a scattering of unknown stars - within aerospace, it’s a classic problem with a memorable name - the lost in space problem. given an image of some stars, exactly which stars are they? aerospace has its own perspectives, culture, and tools - astronomers don’t generally think in terms of three-dimensional unit vectors, rotation matrices, quaternions, and vector-matrix notation - it was very quickly apparent that the concerns and methods in aerospace were more widely applicable than those in astronomy - bringing together optimization, control, data fusion, high performance computing, and nn to solve practical real-world problems. within weeks of beginning, star identification was again one of the top concerns - and once again the first question was whether a practical solution was already available. pete shelus from the hubble astrometry days was a member of the group and pointed out useful directions - there was a strong sense of continuity and awareness that here was a problem that really needed addressing - the obvious differences now were that computing hardware was more powerful, and digital imaging had become standard - there was no longer an analog to digital divide to cross - everything was already in binary.
+thirteen years later, the work for the next eleven years was with bob schutz and the icesat group, directly within the center for space research. the focus was on star trackers, modern descendants of maritime sextants for celestial navigation, and inertial sensors, often referred to simply as gyros.
+
+the problems, once again, concerned images containing a scattering of unknown stars. within aerospace, it’s a classic problem - the 'lost in space' problem. given an image of some stars, exactly which stars are they?
+
+aerospace has its own perspectives, culture, and tools - astronomers don’t generally think in terms of three-dimensional unit vectors, rotation matrices, quaternions, and vector-matrix notation. it was very quickly apparent that the concerns and methods in aerospace were more widely applicable and practical than those in astronomy, bringing together optimization, control, data fusion, and high performance computing to solve real-world problems.
+
+within weeks of beginning, star identification was again one of the top concerns - and once again the first question was whether a practical solution was already available. pete shelus from the hubble astrometry days was an important member of the icesat group and pointed out useful directions. there was a strong sense of continuity and awareness that here was a problem that really needed addressing. the obvious differences now were that computing hardware was more powerful, and digital imaging had become standard. there was no longer an analog to digital divide to cross - everything was already in binary.
-icesat’s control system usually made it straightforward to predict which stars each image contained - this wasn’t obvious or straightforward at first and it took effort and thought to really understand the data coming from the spacecraft - there were four star imagers of three different hardware-types onboard, all sampling at ten hertz or more - these were classic eighties star trackers and didn't provide star identifications. there was also higher-frequency angular-rate data from the inertial unit, and tracking data from the control system - so a pointing vector could usually be estimated for each star-image, and it was usually enough to check whether star-images with appropriate brightnesses were near their predicted positions. brightness information tends to muddy the star identification problem because it’s relatively difficult to measure and predict for a particular imager - images have better geometric information than brightness information - an astronomer interested in brightness does photometry with dedicated sensors, not with imagers. an additional check was that angles between observed star pairs matched predictions, and one of the first objectives was to model errors in these angles from flight data - focusing on star pairs is a big step in the direction of looking at star triangles and patterns.
+icesat’s control system usually made it straightforward to predict which stars each image contained - this wasn’t obvious or straightforward at first and it took effort to really understand the data coming from the spacecraft. there were four star imagers of three different hardware types onboard, all sampling at ten hertz or more. these were classic eighties star trackers and didn't provide star identifications. there was also higher-frequency angular-rate data from the inertial unit, and tracking data from the control system, so a pointing vector could usually be estimated for each star-image.
-it turned out there's a fascinating, though relatively small, literature on star identification and related topics - by the second world war, many large aircraft had a bubble window facing upward for a navigator to make stellar observations - after the war, computing and imaging automated the process. the cold war brought new motivations for the technology - many people became uneasily aware of guidance systems, and while most of the massive efforts went into integrated circuits and inertial guidance sensors, automated star tracking quietly matured in parallel. star trackers are critical for spacecraft, and are used on high altitude aircraft and missiles - the classical period was the sixties through the eighties. surprisingly though, it soon became clear that there was still no publicly-available software package for the lost-in-space star identification problem - apparently, each time star identification software had been developed, it’s been within classified or industry projects. if you were seriously interested in star identification, you probably wanted to sell star trackers - that’s a fairly mature industry now.
+it was usually enough to check whether star-images with appropriate brightnesses were near their predicted positions. brightness information tends to muddy the star identification problem because it’s relatively difficult to measure and predict for a particular imager. images have better geometric information than brightness information. an astronomer interested in star brightness does photometry with dedicated sensors, not with imagers.
+
+an additional check on star recognition was that angles between observed star pairs matched predictions, and one of the first objectives was to model systematic errors in these angles from flight data. focusing on star pairs is a big step in the direction of looking at star triangles, and the core of project starid really formed at this time. at pete's suggestion, there was a bit of discussion with judit reese over in astronomy. and an early 'sourceforge' project and collaboration with undergrads happened around 2004 to 2006. this was essentially 'pre git' and sourceforge was using cvs and svn. git was just being created at that time specifically to replace cvs, svn, sourcesafe, etc.
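+
+for concreteness, here's a toy python/numpy version of the pair-angle check - illustrative stand-in vectors and names, not icesat flight data or the actual code from that era.
+
+```python
+import numpy as np
+
+def pair_angle(u, v):
+    """angle between two unit vectors, in arcseconds."""
+    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))) * 3600.0
+
+# stand-ins for an observed star-image pair and its predicted catalog pair
+obs_a = np.array([1.0, 0.0, 0.0])
+obs_b = np.array([0.9999, 0.0141, 0.0]); obs_b /= np.linalg.norm(obs_b)
+cat_a, cat_b = obs_a.copy(), obs_b.copy()
+
+# one residual sample - modeling the systematic errors meant collecting
+# many of these across the flight data
+residual = pair_angle(obs_a, obs_b) - pair_angle(cat_a, cat_b)
+print(f'{residual:.3f} arcsec')
+```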
+
+it turned out there's a fascinating, though relatively small, literature on star identification and related topics. by the second world war, many large aircraft had a bubble window facing upward for a navigator to make stellar observations. after the war, computing and imaging automated the process.
+
+the cold war brought new motivations for the technology. people became uneasily aware of guidance systems via hollywood tales of spies and gyros. while most of the massive efforts went into integrated circuits and inertial guidance sensors, automated star tracking quietly matured in parallel. star trackers are critical for spacecraft, and are used on high altitude aircraft and missiles. the classical period was the sixties through the eighties, as digital systems really made automation practical.
+
+surprisingly, though, it soon became clear that there was still no publicly-available software for the lost-in-space star identification problem. apparently, each time star identification software had been developed, it had been within classified or industry projects. a serious interest in star identification was probably tied to selling star trackers, and that’s become a fairly mature industry.
2016 - [photos](https://photos.app.goo.gl/z54G7X9dEop1e81y6){:target="_blank" rel="noopener"}
-------------------------------------------------------------
-another thirteen years passed - excitement was growing again, after the ai winter following the eighties, around advances in neural networks - especially at google, which had just open sourced tensorflow. for a number of reasons, it was clearly time to tackle the problem directly, using both geometric and nn methods in parallel where possible - the concept was to start from scratch as a github open source project, integrating tensorflow from the beginning. this meant working in c++ eigen and python numpy - the only external input was to be a list of star positions, and nasa’s skymap star catalog was an ideal source. skymap was created in the 90s specifically for use with star trackers - we’d used it extensively for icesat, even collaborating where possible with its creators. when hubble was launched, one of its early problems was bad guide stars. as part of the overall hubble recovery effort, nasa pushed skymap forward as an improved description of the sky, as seen by standard star trackers.
+another thirteen years passed. excitement was growing again around neural networks in combination with recent hardware, especially at google, which had just open sourced tensorflow. this was definitely a curiosity, as the ai winter following the eighties was still fresh in mind - particularly as csr's offices were in the building built especially for [mcc in austin](https://en.wikipedia.org/wiki/Microelectronics_and_Computer_Technology_Corporation), which was said to have had the largest concentration of lisp machines in the world.
+
+for a number of reasons, it was clearly time to tackle the lost in space problem directly, using both geometric and network methods in parallel where possible. the concept was to start from scratch as a github open source project, integrating tensorflow from the beginning. this meant working in cpp eigen and python numpy.
+
+the only external input was to be a list of star positions, and nasa’s skymap star catalog was an ideal source. skymap was created specifically for use with star trackers, so it was important for icesat. there was even some collaboration where possible with skymap people.
+
+when hubble was launched, one of its early problems was bad guide stars. as part of the overall hubble recovery effort, nasa pushed skymap forward as an improved description of the sky, as seen by standard star trackers. one side effect of all that was that a small number of people came to be involved with star trackers and star catalogs. some of those people played a role in what happened with icesat.
+
+skymap is simply a list of star positions, so how does one generate a star image? the core problem is searching for neighbors of an arbitrary point on a sphere. for example, given a list of points on earth, which of the points are near a particular latitude and longitude? the usual answers involve dividing the sphere up into tiles, transforming and subdividing, etc. even a 'square sky' is not unheard of.
+
+a more dynamic and flexible approach was published by daniele mortari. it’s related to lookup and hash tables, with some interesting quirks. it starts off by viewing stars as unit vectors with three coordinates, each between plus-one and minus-one. the idea is to search for stars within a small range around each coordinate. there are three thin rings on the sky, one around each coordinate-axis. the stars inside the small region where the rings intersect are 'near the target'. this is three independent searches for small ranges of values, followed by an intersection of the results.
+
+each search is performed on a separate precomputed key-value table. the sorted keys are from plus-one to minus-one. values represent star labels. performance can be improved by fitting a curve to the sorted keys, then using it to calculate the bounding lower and upper indexes into the table. this creates something like a range search hash-table, with the fitted curve acting as a 'range hash function'.
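+
+as a concrete illustration, here's a minimal python/numpy sketch of the whole search - names are illustrative, a straight-line fit stands in for the fitted curve, and the tables are rebuilt on each call rather than precomputed as they would be in practice.
+
+```python
+import numpy as np
+
+def build_table(catalog, axis):
+    """sorted keys for one coordinate, with star labels as the values."""
+    order = np.argsort(catalog[:, axis])
+    return catalog[order, axis], order
+
+def range_search(keys, labels, lo, hi):
+    """labels whose key lies in [lo, hi]. a straight-line fit to the sorted
+    keys acts as the 'range hash function', predicting bounding indexes that
+    are then nudged to their exact positions."""
+    n = len(keys)
+    slope = (keys[-1] - keys[0]) / (n - 1)  # assumes the keys span a range
+    i = min(max(int((lo - keys[0]) / slope), 0), n - 1)
+    j = min(max(int((hi - keys[0]) / slope), 0), n - 1)
+    while i > 0 and keys[i - 1] >= lo: i -= 1
+    while i < n and keys[i] < lo: i += 1
+    while j < n - 1 and keys[j + 1] <= hi: j += 1
+    while j >= 0 and keys[j] > hi: j -= 1
+    return set(labels[i:j + 1])
+
+def neighbors(catalog, target, radius):
+    """stars within 'radius' radians of unit vector 'target' - three
+    independent coordinate searches, an intersection, then an exact
+    angular check on the surviving candidates."""
+    hits = None
+    for axis in range(3):
+        keys, labels = build_table(catalog, axis)
+        found = range_search(keys, labels, target[axis] - radius, target[axis] + radius)
+        hits = found if hits is None else hits & found
+    return {s for s in hits if np.dot(catalog[s], target) >= np.cos(radius)}
+```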
+
+cultural differences between aerospace and computer science quickly became apparent. basically, networks want to be about two dimensional images, while aerospace wants to be about physical three dimensional unit vectors. what happened in practice was that a kind of image 'api' grew up organically over the unit vector geometry. this happened over a period of a few months, and a curious sequence of coincidences took place.
+
+standard nineties star tracker images were eight degrees by eight degrees - about 28,000 arcseconds per side, roughly sixteen times the apparent diameter of the moon. the hello world problem in networks, mnist, was standardized in the late nineties using data files and images with 28 pixels per side. adapting those standards resulted in star images with thousand-arcsecond pixels. at first, actual mnist data files were simply overwritten with star images, then fed into standard network processors. gradually, additional advantages became apparent, beyond data file format compatibility.
+
+the effects go much deeper than merely nice rounding properties. they effectively mean low resolution, at the level of a toy camera or blurry mobile phone photo. by comparison, real star tracker images can involve sub-arcsecond resolutions. in the lingo of austin's nineties garage-rock punk scene, these mnist style star images are 'lo-fi'. usually a good thing - keeping it real.
+
+low resolution makes the star identification problem more challenging and interesting. it forces use of global structures and patterns within an image, rather than localized features and heuristics. there’s simply less information available and more has to be done with less. it even suggests questions about how the human brain recognizes stars. for example, a typical high-resolution aerospace algorithm might focus on the exact separation between a pair of stars, along with the angle to a third star. that’s clearly not how the brain works. so, what's the brain in fact doing?
+
+focusing on lo-fi mnist-like images led to a discovery. to recognize a particular star in an image, it's helpful to shift the star to the image-center and make its presence implicit. there’s no point in including it in the image, it's effectively redundant. what’s significant is the relative-geometry of the other stars. the target star becomes the origin of the coordinate system, and if there’s another star nearby, as often happens in a low resolution image, there’s no confusion. in practice, the effects are even nicer since, in a way, there's a 'free' extra star, and there's also less need for coordinate transformations.
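+
+a hedged sketch of that convention, in python/numpy - not the project's actual code - a tangent-plane projection centered on the target star, with the target itself left out of the raster.
+
+```python
+import numpy as np
+
+SIDE = 28                                  # mnist-style pixels per side
+PIXEL = 1000.0 * np.pi / (180.0 * 3600.0)  # thousand-arcsecond pixels, in radians
+
+def star_image(target, others):
+    """rasterize the neighbors of 'target' into a 28x28 image. the target
+    star becomes the origin of the frame and is left implicit - only the
+    relative geometry of the other stars is drawn."""
+    z = target / np.linalg.norm(target)
+    x = np.cross([0.0, 0.0, 1.0], z)
+    if np.linalg.norm(x) < 1e-12:          # target at the pole - use another axis
+        x = np.cross([0.0, 1.0, 0.0], z)
+    x /= np.linalg.norm(x)
+    y = np.cross(z, x)
+    image = np.zeros((SIDE, SIDE))
+    for v in others:
+        u, w = np.dot(v, x) / np.dot(v, z), np.dot(v, y) / np.dot(v, z)  # tangent plane
+        col, row = int(round(u / PIXEL)) + SIDE // 2, int(round(w / PIXEL)) + SIDE // 2
+        if 0 <= row < SIDE and 0 <= col < SIDE:
+            image[row, col] = 1.0
+    return image
+```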
+
+all the way back to ninety, it was clear that the shapes of triangles formed by a star field can be used to identify the stars - and that iterative and even recursive processes could be involved. but once triangles come in, they tend to multiply, which seems uncomfortable - where does it end? skipping ahead to the answer, enlightenment arrives with a simple restatement of the lost-in-space problem.
+
+_start with a set of candidate star identities, then iteratively set aside those that can’t be correct, until only one remains._
-skymap is simply a list of star positions, so how does one generate a star image? the core problem is searching for neighbors of an arbitrary point on a sphere - for example, given a list of points on earth, which of the points are near a particular latitude and longitude? the usual answers involve dividing the sphere up into tiles, transforming and subdividing, etc - even a square-sky is not unheard of. a more dynamic and flexible approach was published by daniele mortari - it’s closely related to lookup and hash tables, but has some unique and interesting quirks - it starts off by viewing stars as unit vectors with three coordinates between plus-one and minus-one. we’re searching for stars within small ranges of each coordinate - picture three thin rings on the sky, one centering on each coordinate-axis, and finding the stars inside the small region where the rings intersect. we’re left with three independent searches for small ranges of values, followed by an intersection of the results - each search is performed on a separate precomputed key-value table, with sorted keys from plus-one to minus-one, and values representing star labels - performance can be improved by fitting a curve to the sorted floating-point keys, then using it to calculate the bounding lower and upper indexes into the table, creating something like a ranged-search hash-table with the fitted curve acting as a hash function.
+it’s brute force, and deeper insights are likely possible, but the main thing is - it works.
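+
+a toy python sketch of that restatement - brute force pruning with illustrative names, assuming integer catalog labels and a pair-angle tolerance.
+
+```python
+import itertools
+import numpy as np
+
+def angle(u, v):
+    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
+
+def identify(observed, candidates, catalog, tol):
+    """observed: unit vectors from the image. candidates: one set of possible
+    catalog labels per observed star. set aside candidates that can't satisfy
+    the measured pair-angles, and repeat until nothing more can be removed."""
+    changed = True
+    while changed:
+        changed = False
+        for i, j in itertools.permutations(range(len(observed)), 2):
+            measured = angle(observed[i], observed[j])
+            for a in set(candidates[i]):
+                # keep 'a' only if some candidate for star j matches the angle
+                if not any(abs(angle(catalog[a], catalog[b]) - measured) < tol
+                           for b in candidates[j] if b != a):
+                    candidates[i].discard(a)
+                    changed = True
+    return candidates  # ideally, each set has been whittled down to one
+```
+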
-cultural differences between nn and aerospace became apparent - to oversimplify, nn wants to be about two dimensional images, while aerospace wants to be about physical three dimensional unit vectors. a higher-level image interface organically grew over the lower-level unit vector geometry over a period of a few months, and a curious sequence of coincidences took place - standard nineties star tracker images were eight degrees by eight degrees - 28,000 arcseconds per side - roughly sixteen times the apparent diameter of the moon. the hello world problem in nn, mnist, was standardized in the late nineties using data files and images with 28 pixels per side. adapting those standards resulted in star images with thousand-arcsecond pixels - at first, actual mnist data files were simply overwritten with star images, then fed into standard nn processors - gradually, additional advantages became apparent, beyond data file format compatibility - the implications are deeper than nice rounding properties, since they effectively mean low resolution - at the level of a toy camera or blurry mobile phone photo - by comparison, real star tracker images can involve sub-arcsecond resolutions. low resolution makes the star identification problem more challenging and interesting - it forces use of global structures and patterns within an image, rather than localized features and heuristics - there’s simply less information available and more has to be done with less. it even suggests questions about how the human brain solves the problem and identifies stars - for example, a typical high-resolution aerospace algorithm might focus on the exact distance between a pair of stars, along with the angle to a third star - that’s clearly not how the brain works, so what's the brain in fact doing?
+between the star-level and triangle-level is the pair-level. in practice, it’s the fundamental structural unit. pair-level 'sides' make up the triangles. soon after code for mnist style star images came code for star pairs separated by less than eleven degrees on the sky. this was the fourth use of the key-value table described above, to represent nearly one million pairs as angles and member star identifiers.
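+
+a hedged numpy sketch of such a pair table - illustrative, with a dense pairwise product that's fine for a tracker-sized catalog of a few thousand stars.
+
+```python
+import numpy as np
+
+def build_pair_table(catalog, max_sep_deg=11.0):
+    """every catalog pair separated by less than eleven degrees, stored as
+    sorted angle keys with member star labels as values - the same key-value
+    style as the coordinate tables above."""
+    cosines = catalog @ catalog.T                    # all pairwise dot products
+    close = np.triu(cosines > np.cos(np.radians(max_sep_deg)), k=1)
+    i, j = np.nonzero(close)
+    angles = np.arccos(np.clip(cosines[i, j], -1.0, 1.0))
+    order = np.argsort(angles)
+    return angles[order], np.column_stack((i, j))[order]
+```
+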
-focusing on low resolution, mnist-like images led to a discovery - to identify a particular star in an image, it's helpful to shift the star to the image-center and make its presence implicit - there’s no point in including it in the image, what’s significant is the relative-geometry of the other stars. the target star becomes the origin of the coordinate system, and if there’s another star nearby, as often happens in a low resolution image, there’s no confusion - in practice, the effects are even nicer, since, in a way, there's a 'free' extra star and less need for coordinate transformations. all the way back to ninety, it was clear that the shapes of triangles formed by a star field can be used to identify the stars - and that iterative and even recursive processes could be involved - but once you start thinking about triangles, they tend to multiply, which seems uncomfortable - where does it end? skipping ahead to the answer, enlightenment arrives with a simple restatement of the lost-in-space problem - start with a set of candidate star identities and iteratively set aside those that can’t be correct until only one remains - it’s brute force, and deeper insights are likely possible - the main thing is, it works.
+the initial concept was to focus on groups of four stars instead of just three. for a triangle of three stars, adding a fourth provides significantly more information - six edges instead of three, two of which are a shared pair. two sets of possible stars for the two triangles have to agree via the shared-pair. with low resolution, this isn't as useful as it sounds - there are too many pairs that meet low resolution constraints. a low resolution shared-pair just doesn’t provide enough unique information, it’s too ambiguous. in other words, at low resolution many of the sky's triangles are similar.
-between the star-level and triangle-level is the pair-level - in practice, it’s the fundamental structural unit, and soon after code for star images came code for pairs separated by less than eleven degrees on the sky. this was the fourth use of the key-value table described above, to represent nearly one million pairs as angles and member star identifiers. the initial concept was to focus on groups of four stars instead of just three - for a triangle of three stars, adding a fourth provides significantly more information - six edges instead of three, two of which are a shared pair - the tradeoff is significantly more complexity. for two adjacent triangles, the shared-pair represents a new type of constraint for which stars are possible - picture two sets of possible stars for the two triangles, kept in agreement via the shared-pair. with low resolution, this is harder than it sounds - there are too many pairs that meet low resolution constraints - a low resolution shared-pair just doesn’t provide enough unique information, it’s too ambiguous - in other words, at low resolution many of the skies triangles are similar. eventually, the concept of the shared-pair became the focus - any pair of stars can be a shared-pair parent with many child-triangles. with the target star implicit in the center of an image containing ten other stars, there are ten shared-pairs that include the target star - each of these is the parent of nine child-triangles.
+eventually, the concept of the shared-pair became the real focus. any pair of stars can be a shared-pair parent of many child-triangles. with the target star implicit in the center of an image containing ten other stars, there are ten shared-pairs that include the target star. each of these is the parent of nine child-triangles.
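+
+the bookkeeping is simple enough to spell out in a tiny python sketch of the counting in that last paragraph.
+
+```python
+# ten other stars around an implicit central target star
+target = 'target'
+others = [f'star{k}' for k in range(10)]
+
+shared_pairs = [(target, o) for o in others]    # ten shared-pairs with the target
+for _, second in shared_pairs:
+    children = [(target, second, third) for third in others if third != second]
+    assert len(children) == 9                   # nine child-triangles per parent
+print(len(shared_pairs), 'shared-pairs,', 9 * len(shared_pairs), 'child-triangles')
+```
+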
-references
-------------------
+further reading
+================
-[personal](#anchor1), [star identification](#anchor2), [spacecraft attitude](#anchor3), [texas minor planet project](#anchor4)
+[icesat](#anchor1), [star identification](#anchor2), [spacecraft attitude](#anchor3), [texas minor planet project](#anchor4)
-personal
+icesat
+-----------------------------
2017, effect of sun shade performance on icesat-2 laser reference sensor alignment estimation, patel, smith, bae, schutz, aas advances in the astronautical sciences, [pdf](papers/2017%20aas.pdf)
@@ -114,6 +180,7 @@ references
2008, precision orbit and attitude determination for icesat, schutz, bae, smith, sirota, aas advances in the astronautical sciences, [pdf](papers/2008%20aas.pdf)
star identification
+-----------------------------
1977, star pattern recognition for real time attitude determination, junkins, [pdf](papers/1977%20junkins.pdf)
@@ -146,10 +213,12 @@ references
2015, an autonomous star identification algorithm based on one dimensional vector pattern for star sensors, luo, [pdf](papers/2015%20luo.pdf)
spacecraft attitude
+-----------------------------
2006, the quest for better attitudes, shuster, [pdf](papers/2006%20shuster.pdf)
texas minor planet project
+-----------------------------
1986, the use of space telescope to tie the hipparcos reference frame to an extragalactic reference frame, hemenway, duncombe, astrometric techniques, [pdf](papers/1986%20hemenway.pdf)
diff --git a/index.md b/index.md
index 799414a..e69de29 100644
--- a/index.md
+++ b/index.md
@@ -1,3 +0,0 @@
-
-hello world
-
diff --git a/readme.md b/readme.md
index 5cb84cf..184aa76 100644
--- a/readme.md
+++ b/readme.md
@@ -2,7 +2,7 @@
[about](https://statespacedev.github.io/starid/about.html) - background notes and photos
-[website](https://statespacedev.github.io/starid) - project website and blog
+[website](https://statespacedev.github.io/starid) - project website
20240317 switch back to github from gitlab is looking complete - github was first, 2016 to 2018 - switch to gitlab was about account troubles and comparing capabilities, 2018 to 2024 - switch back to github seems to effectively be about 'going default'. don't feel like thinking about it - go with default. with commit email address updated, and default project website activated, looks like 'third era' is rolling from today.