It's a bit complicated because the names have different meanings depending on the context.
int
In Python
The int is normally just a Python type. It is of arbitrary precision, meaning that you can store any conceivable integer inside it (as long as you have enough memory).
>>> int(10**50)
100000000000000000000000000000000000000000000000000
However, when you use it as the dtype for a NumPy array it will be interpreted as np.int_ [1], which is not of arbitrary precision; it will have the same size as C's long:
>>> np.array(10**50, dtype=int)
OverflowError: Python int too large to convert to C long
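If you leave out the dtype entirely, NumPy may instead fall back to an object array that simply stores the Python integer (the exact behaviour can vary between NumPy versions):
>>> np.array(10**50)  # no dtype given, falls back to dtype=object here
array(100000000000000000000000000000000000000000000000000, dtype=object)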
That also means the following two are equivalent:
np.array([1,2,3], dtype=int)
np.array([1,2,3], dtype=np.int_)
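One quick way to confirm the equivalence (a small sketch; both sides produce the same platform-dependent integer dtype):
>>> np.array([1, 2, 3], dtype=int).dtype == np.array([1, 2, 3], dtype=np.int_).dtype
True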
As a Cython type identifier it has another meaning: here it stands for the C type int, which is of limited precision (typically 32 bits). You can use it as a Cython type, for example when defining variables with cdef:
cdef int value = 100 # variable
cdef int[:] arr = ... # memoryview
As return type or argument type for cdef or cpdef functions:
cdef int my_function(int argument1, int argument2):
    # ...
As "generic" for ndarray
:
cimport numpy as cnp
cdef cnp.ndarray[int, ndim=1] val = ...
For type casting:
avalue = <int>(another_value)
And probably many more.
In Cython, but as a Python type, you can still call int and you'll get a "Python int" (of arbitrary precision), or use it for isinstance or as the dtype argument for np.array. Here the context is important, so converting to a Python int is different from converting to a C int:
cdef object val = int(10) # Python int
cdef int val = <int>(10) # C int
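If you want to see the precision difference without compiling any Cython, here is a rough pure-Python illustration using ctypes (assuming a 32-bit C int, which is typical but not guaranteed):
>>> import ctypes
>>> 10**10                       # Python int keeps full precision
10000000000
>>> ctypes.c_int(10**10).value   # a 32-bit C int silently wraps around
1410065408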
np.int
Actually this is very easy. It's just an alias for int:
>>> int is np.int
True
So everything from above applies to np.int as well. However, you can't use it as a type identifier, except when you use it on the cimported package, in which case it represents the Python integer type.
cimport numpy as cnp
cpdef func(cnp.int obj):
    return obj
This will expect obj to be a Python integer, not a NumPy type:
>>> func(np.int_(10))
TypeError: Argument 'obj' has incorrect type (expected int, got numpy.int32)
>>> func(10)
10
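The underlying reason can be seen in plain Python too: on Python 3, NumPy integer scalars are not instances of the built-in int:
>>> isinstance(10, int)
True
>>> isinstance(np.int_(10), int)
False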
My advice regarding np.int: avoid it whenever possible. In Python code it's equivalent to int, and in Cython code it's also equivalent to Python's int, but if used as a type identifier it will probably confuse you and everyone who reads the code! It certainly confused me...
np.int_
Actually it only has one meaning: it's a Python type that represents a scalar NumPy type. You use it like Python's int:
>>> np.int_(10) # looks like a normal Python integer
10
>>> type(np.int_(10)) # but isn't (output may vary depending on your system!)
numpy.int32
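Unlike Python's int, such a scalar carries a fixed-width NumPy dtype (again, the exact output depends on your system):
>>> x = np.int_(10)
>>> x.dtype       # fixed-width NumPy dtype
dtype('int64')
>>> x.itemsize    # size in bytes
8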
Or you use it to specify the dtype, for example with np.array:
>>> np.array([1,2,3], dtype=np.int_)
array([1, 2, 3])
But you cannot use it as a type identifier in Cython.
cnp.int_t
It's the type-identifier version of np.int_. That means you can't use it as a dtype argument. But you can use it as a type for cdef declarations:
cimport numpy as cnp
import numpy as np
cdef cnp.int_t[:] arr = np.array([1,2,3], dtype=np.int_)
     |---TYPE---|                         |---DTYPE---|
This example (hopefully) shows that the type identifier with the trailing _t actually represents the type of an array using the dtype without the trailing t. You can't interchange them in Cython code!
Notes
There are several more numeric types in NumPy. I'll include a list here containing the NumPy dtype, the NumPy Cython type identifier, and the C type identifier that could also be used in Cython. It's basically taken from the NumPy documentation and the Cython NumPy pxd file:
NumPy dtype     NumPy Cython type   C Cython type identifier
np.bool_        None                None
np.int_         cnp.int_t           long
np.intc         None                int
np.intp         cnp.intp_t          ssize_t
np.int8         cnp.int8_t          signed char
np.int16        cnp.int16_t         signed short
np.int32        cnp.int32_t         signed int
np.int64        cnp.int64_t         signed long long
np.uint8        cnp.uint8_t         unsigned char
np.uint16       cnp.uint16_t        unsigned short
np.uint32       cnp.uint32_t        unsigned int
np.uint64       cnp.uint64_t        unsigned long
np.float_       cnp.float64_t       double
np.float32      cnp.float32_t       float
np.float64      cnp.float64_t       double
np.complex_     cnp.complex128_t    double complex
np.complex64    cnp.complex64_t     float complex
np.complex128   cnp.complex128_t    double complex
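You can verify some of these mappings on your own machine with a small ctypes sketch; note that the np.int_ / C long correspondence in particular may differ on newer NumPy versions or on Windows:
>>> import ctypes
>>> np.dtype(np.intc).itemsize == ctypes.sizeof(ctypes.c_int)
True
>>> np.dtype(np.float64).itemsize == ctypes.sizeof(ctypes.c_double)
True
>>> np.dtype(np.int_).itemsize == ctypes.sizeof(ctypes.c_long)
True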
Actually there are Cython types for np.bool_: cnp.npy_bool and bint, but neither can currently be used for NumPy arrays. For scalars cnp.npy_bool will just be an unsigned integer while bint will be a boolean. Not sure what's going on there...
[1] Taken from the NumPy documentation "Data type objects":
Built-in Python types
Several python types are equivalent to a corresponding array scalar when used to generate a dtype object:
int           np.int_
bool          np.bool_
float         np.float_
complex       np.cfloat
bytes         np.bytes_
str           np.bytes_ (Python2) or np.unicode_ (Python3)
unicode       np.unicode_
buffer        np.void
(all others)  np.object_
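These equivalences are easy to check from Python (a small sketch; I use np.float64 and np.complex128 instead of np.float_ and np.cfloat so it works on all NumPy versions):
>>> np.dtype(int) == np.dtype(np.int_)
True
>>> np.dtype(bool) == np.dtype(np.bool_)
True
>>> np.dtype(float) == np.dtype(np.float64)
True
>>> np.dtype(complex) == np.dtype(np.complex128)
True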