General-purpose compression algorithms already save a lot of space on repeated strings, and pages from the same website are a near-ideal case because they share markup, headers, and boilerplate. Instead of saving and compressing each page individually, put similar pages into the same file and compress that file as a whole; the compression ratio should be much better.
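As a rough illustration with Python's standard `zlib` module (the page contents below are made up, and in practice you'd store the bundle in a container like tar so you can find page boundaries again):

```python
import zlib

# Hypothetical pages from the same site; real pages would come from disk
# or from your crawler.
pages = [
    b"<html><head><title>Product A</title></head><body>Same layout, menu, footer...</body></html>",
    b"<html><head><title>Product B</title></head><body>Same layout, menu, footer...</body></html>",
    b"<html><head><title>Product C</title></head><body>Same layout, menu, footer...</body></html>",
]

# Compressing each page on its own: the shared markup is paid for every time.
individual = sum(len(zlib.compress(p, 9)) for p in pages)

# Compressing the pages as one bundle lets the compressor reuse the markup
# that repeats across pages.
bundled = len(zlib.compress(b"".join(pages), 9))

print(f"individual: {individual} bytes, bundled: {bundled} bytes")
```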
Also try to make the compressor's window size larger than the average page, so that it can reference data from previous pages in the same file.
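zstd, for example, lets you raise the window size explicitly. Below is a minimal sketch using the third-party `zstandard` Python package; the file names and the choice of `window_log=27` (a 128 MiB window) are just assumptions for illustration, so check them against the version you install:

```python
import zstandard as zstd

# window_log=27 gives a 128 MiB window, far larger than a typical page,
# so back-references can reach into earlier pages in the bundle.
params = zstd.ZstdCompressionParameters.from_level(19, window_log=27)
cctx = zstd.ZstdCompressor(compression_params=params)

# "bundle.tar" is a placeholder for whatever file holds the grouped pages.
with open("bundle.tar", "rb") as src, open("bundle.tar.zst", "wb") as dst:
    cctx.copy_stream(src, dst)
```

The zstd command-line tool exposes the same idea through its `--long` flag; keep in mind that the decompressing side also has to allow the larger window.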
If you are saving them to disk or in a database, you can use the domain name or the first X characters of the URL as your key for finding the compressed file, so pages under the same domain or directory naturally end up compressed together.
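A minimal sketch of that grouping step, assuming the pages are held in memory as a URL-to-bytes mapping and written out as one gzip file per domain (the layout and names are just one possible scheme):

```python
import gzip
import os
from collections import defaultdict
from urllib.parse import urlsplit

def bundle_by_domain(pages: dict[str, bytes], out_dir: str = "bundles") -> None:
    """Write one compressed file per domain; `pages` maps URL -> raw HTML."""
    groups: dict[str, list[bytes]] = defaultdict(list)
    for url, html in pages.items():
        groups[urlsplit(url).netloc].append(html)

    os.makedirs(out_dir, exist_ok=True)
    for domain, htmls in groups.items():
        # Pages from the same domain share one compressed stream.
        with gzip.open(os.path.join(out_dir, f"{domain}.gz"), "wb") as f:
            for html in htmls:
                f.write(html)
```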
Another approach you can take is to create "dictionaries" for your compression. Some compression algorithms let you train a dictionary from a set of sample files and then use it to compress similar files more effectively. zstd is one algorithm that supports this, and its documentation describes how to use the dictionary feature.
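A rough sketch of that workflow with the third-party `zstandard` Python package (the function names and the dictionary size are illustrative choices, not fixed requirements):

```python
import zstandard as zstd

def train_page_dictionary(samples: list[bytes]) -> zstd.ZstdCompressionDict:
    # Train on representative pages; 100 KB is an arbitrary cap for this sketch.
    return zstd.train_dictionary(100_000, samples)

def compress_page(page: bytes, dict_data: zstd.ZstdCompressionDict) -> bytes:
    cctx = zstd.ZstdCompressor(dict_data=dict_data)
    return cctx.compress(page)

def decompress_page(blob: bytes, dict_data: zstd.ZstdCompressionDict) -> bytes:
    # The same dictionary must be kept around for decompression.
    dctx = zstd.ZstdDecompressor(dict_data=dict_data)
    return dctx.decompress(blob)
```

This works best when pages are compressed one at a time rather than bundled, since the dictionary supplies the shared context that a small individual page would otherwise lack.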