Using the Python Disco project, for example.
Good. Play with that.
Using the RHIPE package and finding toy datasets and problem areas.
Fine. Play with that, too.
Don't sweat finding "big" datasets. Even small datasets present very interesting problems. Indeed, any dataset is a jumping-off point.
I once built a small star schema to analyze the $60M budget of an organization. The source data was in spreadsheets and was essentially incomprehensible, so I unloaded it into a star schema and wrote several analytical programs in Python to produce simplified reports of the relevant numbers.
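The programs themselves don't need to be exotic. Here's a minimal sketch of that kind of report, assuming hypothetical departments.csv and budget_facts.csv flat files; the file names and column names are illustrative, not from the actual project:

    import csv
    from collections import defaultdict

    # Dimension file: department key -> descriptive name (layout is illustrative).
    with open("departments.csv", newline="") as f:
        dept_names = {row["dept_key"]: row["name"] for row in csv.DictReader(f)}

    # Fact file: one row per budget line item.
    totals = defaultdict(float)
    with open("budget_facts.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["dept_key"]] += float(row["amount"])

    # Simplified report: spend per department, largest first.
    for key, amount in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{dept_names.get(key, key):30s} {amount:>14,.2f}")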
Finding the right information to allow me to decide if I need to move to NoSQL from RDBMS-type databases.
This is easy.
First, get a book on data warehousing (Ralph Kimball's The Data Warehouse Toolkit, for example).
Second, study the "Star Schema" carefully -- particularly all the variants and special cases that Kimball explains in depth. (A concrete sketch of the schema appears after the third point below.)
Third, realize the following: SQL is for Updates and Transactions.
When doing "analytical" processing (big or small), there's almost no updating of any kind. SQL (and the related normalization) doesn't really matter much anymore.
Kimball's point (and others', too) is that most of your data warehouse is not in SQL; it's in simple flat files. A data mart (for ad hoc, slice-and-dice analysis) may be in a relational database to permit easy, flexible processing with SQL.
So the "decision" is trivial. If it's transactional ("OLTP"), it must be in a relational or OO database. If it's analytical ("OLAP"), it doesn't require SQL except for slice-and-dice analytics -- and even then the database is loaded from the official flat files as needed.
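To make the star schema and the load-as-needed pattern concrete, here's a minimal sketch using Python's standard sqlite3 and csv modules. The table names, the budget_facts.csv file, and its column layout are illustrative assumptions, not anything prescribed by Kimball:

    import csv
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Star schema: one narrow fact table keyed to small descriptive dimensions.
    conn.executescript("""
        CREATE TABLE dim_department (dept_key INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE dim_month (month_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
        CREATE TABLE fact_spend (
            dept_key  INTEGER REFERENCES dim_department,
            month_key INTEGER REFERENCES dim_month,
            amount    REAL
        );
    """)

    # Dimension rows would normally come from their own flat files;
    # two literal rows keep the sketch short.
    conn.executemany("INSERT INTO dim_department VALUES (?, ?)",
                     [(1, "Engineering"), (2, "Operations")])

    # Load the data mart from the "official" flat file as needed:
    # bulk insert, no transactional updates. The file layout is hypothetical.
    with open("budget_facts.csv", newline="") as f:
        rows = [(int(r["dept_key"]), int(r["month_key"]), float(r["amount"]))
                for r in csv.DictReader(f)]
    conn.executemany("INSERT INTO fact_spend VALUES (?, ?, ?)", rows)

    # Slice-and-dice: spend by department, largest first.
    query = """
        SELECT d.name, SUM(f.amount) AS total
        FROM fact_spend f JOIN dim_department d ON d.dept_key = f.dept_key
        GROUP BY d.name ORDER BY total DESC
    """
    for name, total in conn.execute(query):
        print(f"{name:30s} {total:>14,.2f}")

The point of the shape: the fact table stays narrow and append-only, while all the descriptive attributes live in the dimensions -- which is what makes the GROUP BY slicing cheap and obvious.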