
python - Using a custom object in pandas.read_csv()

I am interested in streaming a custom object into a pandas DataFrame. According to the documentation, any object with a read() method can be used. However, even after implementing this method I am still getting this error:

ValueError: Invalid file path or buffer object type: <class '__main__.DataFile'>

Here is a simple version of the object, and how I am calling it:

class DataFile(object):
    def __init__(self, files):
        self.files = files

    def read(self):
        for file_name in self.files:
            with open(file_name, 'r') as file:
                for line in file:
                    yield line

import pandas as pd
hours = ['file1.csv', 'file2.csv', 'file3.csv']

data = DataFile(hours)
df = pd.read_csv(data)

Am I missing something, or is it just not possible to use a custom generator with pandas? When I call the read() method directly, it works just fine.

EDIT: The reason I want to use a custom object rather than concatenating the DataFrames together is to see if it is possible to reduce memory usage. I have used the gensim library in the past, and it makes it really easy to use custom data objects, so I was hoping to find a similar approach.
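
For comparison, here is a minimal sketch (not from the original post) of an input pandas does accept without complaint: a real file-like buffer such as io.StringIO. It works, but it reads every file into memory up front, which is exactly what the custom object is meant to avoid:

import io
import pandas as pd

hours = ['file1.csv', 'file2.csv', 'file3.csv']

# Read every file into one in-memory text buffer. io.StringIO behaves like
# an open file, so read_csv accepts it -- but the whole concatenated content
# is held in memory at once (this assumes the files have no header rows).
parts = []
for name in hours:
    with open(name) as f:
        parts.append(f.read())

df = pd.read_csv(io.StringIO(''.join(parts)), header=None)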


1 Answer


One way to make a file-like object in Python 3 is to subclass io.RawIOBase. Using Mechanical snail's iterstream, you can convert any iterable of bytestrings into a file-like object:

import tempfile
import io
import pandas as pd

def iterstream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """
    http://stackoverflow.com/a/20260030/190597 (Mechanical snail)
    Lets you use an iterable (e.g. a generator) that yields bytestrings as a
    read-only input stream.

    The stream implements Python 3's newer I/O API (available in Python 2's io
    module).

    For efficiency, the stream is buffered.
    """
    class IterStream(io.RawIOBase):
        def __init__(self):
            self.leftover = None
        def readable(self):
            return True
        def readinto(self, b):
            try:
                l = len(b)  # We're supposed to return at most this much
                chunk = self.leftover or next(iterable)
                output, self.leftover = chunk[:l], chunk[l:]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0    # indicate EOF
    return io.BufferedReader(IterStream(), buffer_size=buffer_size)


class DataFile(object):
    def __init__(self, files):
        self.files = files

    def read(self):
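        # Note: the files are opened in binary mode below, so read() yields
        # bytestrings -- which is what iterstream's readinto() expects to
        # slice and copy into its buffer.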
        for file_name in self.files:
            with open(file_name, 'rb') as f:
                for line in f:
                    yield line

def make_files(num):
    filenames = []
    for i in range(num):
        with tempfile.NamedTemporaryFile(mode='wb', delete=False) as f:
            f.write(b'''1,2,3
4,5,6
''')
            filenames.append(f.name)
    return filenames

# hours = ['file1.csv', 'file2.csv', 'file3.csv']
hours = make_files(3)
print(hours)
data = DataFile(hours)
df = pd.read_csv(iterstream(data.read()), header=None)

print(df)

prints

   0  1  2
0  1  2  3
1  4  5  6
2  1  2  3
3  4  5  6
4  1  2  3
5  4  5  6
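
If memory is still a concern, one possible extension (an assumption on my part, not something the original answer covers) is to combine the same stream with read_csv's chunksize parameter, which yields small DataFrames one at a time instead of building the whole frame at once:

for chunk in pd.read_csv(iterstream(DataFile(hours).read()),
                         header=None, chunksize=2):
    # Each chunk is a DataFrame of at most 2 rows; only one chunk is
    # held in memory at a time.
    print(chunk.shape)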
