I am going to write new functionality in my code that processes some XML data in a memory/CPU-efficient way. It will mainly analyze data from files, streams, byte arrays, etc., so SAXParser seems to fit all of the above requirements.
Unfortunately, this new functionality will also need to analyze some XML data that is generated by older code, which uses DOM solutions and returns Document objects.
Of course I could save that DOM Document to a file/stream/byte array etc. and then use SAXParser to process it, but such a solution would require additional memory to hold that copy of the data, which is completely unnecessary from the data-processing perspective.
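For concreteness, the workaround I want to avoid looks roughly like this (a sketch; the handler parameter stands in for my custom SAX handler):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.xml.sax.helpers.DefaultHandler;

public class DomToSaxWorkaround {
    static void process(Document doc, DefaultHandler handler) throws Exception {
        // Serialize the whole DOM into an in-memory buffer...
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(buffer));

        // ...then re-parse that buffer with SAX. The buffer duplicates
        // data that already lives in the DOM tree, which is exactly the
        // memory overhead I want to avoid.
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(buffer.toByteArray()), handler);
    }
}
```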
Therefore I'm looking for some kind of DOM document crawler that reads already existing DOM data but uses SAX handlers to process it. That would let me implement the basic processing logic only once, in my custom SAX handler, and still accept any kind of input data.
Have you encountered anything like this?
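To illustrate the shape of what I'm after, a hand-rolled crawler might look like this. This is only a sketch I wrote to clarify the idea, not a real API: it ignores namespaces, comments, processing instructions, and so on.

```java
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;

public class DomCrawler {
    // Recursively walk an existing DOM tree and fire the corresponding
    // SAX events on the given handler, without serializing anything.
    public static void crawl(Node node, ContentHandler handler) throws SAXException {
        switch (node.getNodeType()) {
            case Node.DOCUMENT_NODE:
                handler.startDocument();
                for (Node c = node.getFirstChild(); c != null; c = c.getNextSibling()) {
                    crawl(c, handler);
                }
                handler.endDocument();
                break;
            case Node.ELEMENT_NODE:
                // Copy DOM attributes into a SAX Attributes object
                // (namespaces deliberately left empty in this sketch).
                AttributesImpl attrs = new AttributesImpl();
                NamedNodeMap map = node.getAttributes();
                for (int i = 0; i < map.getLength(); i++) {
                    Node a = map.item(i);
                    attrs.addAttribute("", a.getNodeName(), a.getNodeName(),
                            "CDATA", a.getNodeValue());
                }
                handler.startElement("", node.getNodeName(), node.getNodeName(), attrs);
                for (Node c = node.getFirstChild(); c != null; c = c.getNextSibling()) {
                    crawl(c, handler);
                }
                handler.endElement("", node.getNodeName(), node.getNodeName());
                break;
            case Node.TEXT_NODE:
                char[] text = node.getNodeValue().toCharArray();
                handler.characters(text, 0, text.length);
                break;
            default:
                // Comments, PIs, CDATA sections, etc. are skipped here.
                break;
        }
    }
}
```

I could maintain something like this myself, but I'd rather reuse an existing, well-tested implementation if one exists.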
question from:
https://stackoverflow.com/questions/66061876/how-to-analyze-a-java-dom-document-using-a-sax-handler