Pandas itself doesn't know about cloud storage and works with local files only. On Databricks you can copy the file to the local disk first, so you can open it with Pandas. The copy can be done either with `%fs cp abfss://.... file:///your-location`
or with `dbutils.fs.cp("abfss://....", "file:///your-location")`
(see docs).
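A minimal sketch of the copy-then-read pattern. Outside Databricks there is no `dbutils`, so `shutil.copy` stands in for `dbutils.fs.cp` here, and the paths are hypothetical local ones:

```python
import os
import shutil
import tempfile

import pandas as pd

# Create a small CSV to play the role of the file fetched from cloud storage.
# On Databricks the real step would be:
#   dbutils.fs.cp("abfss://....", "file:///tmp/data.csv")
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "data.csv")
with open(src, "w") as f:
    f.write("id,value\n1,10\n2,20\n")

# Copy to a "local" location (stand-in for dbutils.fs.cp / %fs cp).
dst = os.path.join(tempfile.mkdtemp(), "data_local.csv")
shutil.copy(src, dst)

# Pandas can now read the local copy as usual.
df = pd.read_csv(dst)
print(df.shape)  # -> (2, 2)
```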
Another possibility is to use the Koalas library instead of Pandas: it provides a Pandas-compatible API on top of Spark. Besides the ability to access data in the cloud directly, you also get the possibility to run your code in a distributed fashion.
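An illustrative sketch, assuming a Databricks cluster where Koalas is installed (on newer Spark runtimes the same API ships as `pyspark.pandas`); the storage path and column names are hypothetical:

```python
import databricks.koalas as ks  # on Spark 3.2+: import pyspark.pandas as ks

# Read directly from cloud storage -- no local copy needed, because
# Koalas delegates to Spark, which understands abfss:// paths.
kdf = ks.read_csv("abfss://container@account.dfs.core.windows.net/data.csv")

# Pandas-style operations, executed distributed on the cluster.
result = kdf.groupby("id")["value"].mean()

# Convert to a real pandas DataFrame only once the result is small
# enough to fit on the driver.
pdf = result.to_pandas()
```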