One of the reasons I'm releasing PySTDF is that I haven't seen anything quite like it out there. I have seen a few projects in this space, as well as plenty of commercial tools for working with STDF. But nothing quite like PySTDF -- so how is PySTDF different?
Stream-oriented
PySTDF was designed first and foremost to be an event-based parser. If you are familiar with XML, this is a similar approach to SAX parsing. If you are not, no problem: the idea is that you register actions to handle each of the different record types. This has many advantages:
- Very fast
- Low memory overhead
- Very flexible
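To make the event-based idea concrete, here is a minimal sketch of the pattern in plain Python. The `Dispatcher` class and the record names are illustrative only, not PySTDF's actual API:

```python
# Minimal sketch of an event-based (SAX-style) record parser.
# The class and record names here are illustrative, not PySTDF's API.

class Dispatcher:
    def __init__(self):
        self.handlers = {}  # record type name -> list of callbacks

    def on(self, rec_type, callback):
        """Register an action for a given record type."""
        self.handlers.setdefault(rec_type, []).append(callback)

    def emit(self, rec_type, fields):
        """Fire every callback registered for this record type."""
        for callback in self.handlers.get(rec_type, []):
            callback(fields)

# Usage: count part-result records without holding the file in memory.
d = Dispatcher()
count = [0]
d.on("Prr", lambda fields: count.__setitem__(0, count[0] + 1))

# A real parser would emit these events while streaming the file:
d.emit("Prr", {"PART_ID": "1"})
d.emit("Prr", {"PART_ID": "2"})
print(count[0])  # -> 2
```

Because each record is handled and then discarded, memory use stays flat no matter how large the datalog is.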
Python
Python is widely used in scientific applications, such as biotechnology and physics. I think the reason for this is that scientists are more concerned with solving problems and playing with data -- Python's simple language doesn't get in the way, and it performs well enough to get the job done.
Deals with STDF's warts
STDF is ubiquitous, and convenient as a standard datalog format -- but how standard is it really? Or how usable? In my experience I have struggled with many of STDF's warts:
- STDF has some really strange data types -- like variable length bit-fields, fields that specify the size of other fields, and other weirdness. Writing a parser for all these cases isn't fun.
- STDF is hard to use
Semiconductor test engineers should be able to play with data without having to deal with the messiness of the STDF format. You were probably hoping to load that data into some kind of statistical analysis tool, right?
- Broken, dirty data
STDF data is only as good as the ATE vendor's implementation of the format, and the identifiers used in the testing process. In my experience, there are many cases where the data needs to be repaired, cleaned or otherwise preprocessed. A stream-oriented parser is well-suited to solve many of these issues.
- C libraries
Engineers need to be able to play with data, not wrestle with compilers, memory allocation, and pointers. Programming in C gets in the way of experimentation and rapid application development.
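As an illustration of the first wart above, here is how a parser might decode one of STDF's length-prefixed types: the Cn field, a one-byte character count followed by that many characters. The helper name is mine; the field layout comes from the STDF V4 specification:

```python
import struct

# Sketch: decoding STDF's length-prefixed "Cn" field (one count byte
# followed by that many characters). The helper name is illustrative;
# the layout itself follows the STDF V4 spec.

def read_cn(buf, offset):
    """Return (string, new_offset) for a Cn field at `offset`."""
    (length,) = struct.unpack_from("B", buf, offset)
    start = offset + 1
    value = buf[start:start + length].decode("ascii")
    return value, start + length

# A record fragment containing two Cn fields, the second one empty:
raw = bytes([7]) + b"WAFER01" + bytes([0])
wafer, pos = read_cn(raw, 0)
empty, pos = read_cn(raw, pos)
print(wafer, repr(empty))  # -> WAFER01 ''
```

Every size-prefixed or size-referencing field forces this kind of two-step read, which is exactly the bookkeeping a good parser should hide from the engineer analyzing the data.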