mirror of
https://github.com/sqlalchemy/sqlalchemy.git
synced 2026-05-07 01:10:52 -04:00
Run row value processors up front
As part of a larger series of changes to generalize row-tuples, RowProxy becomes plain Row and is no longer a "proxy"; the DBAPI row is now copied directly into the Row when constructed, and result handling occurs all at once. Subsequent changes will break Row out into a new version that behaves fully as a tuple. Change-Id: I2ffa156afce5d21c38f28e54c3a531f361345dd5
This commit is contained in: Vendored
@@ -535,6 +535,63 @@ as::
:ticket:`4753`


.. _change_4710_row:

The "RowProxy" is no longer a "proxy", now called ``Row``
---------------------------------------------------------

Since the beginning of SQLAlchemy, the Core result objects exposed to the
user are the :class:`.ResultProxy` and ``RowProxy`` objects.  The name
"proxy" refers to the `GOF Proxy Pattern
<https://en.wikipedia.org/wiki/Proxy_pattern>`_, emphasizing that these
objects present a facade around the DBAPI ``cursor`` object and the
tuple-like objects returned by methods such as ``cursor.fetchone()``; as
methods on the result and row proxy objects are invoked, the underlying
methods or data members of the ``cursor`` and the tuple-like objects are
invoked behind the scenes.

In particular, SQLAlchemy's row-processing functions would be invoked as a
particular column in a row was accessed.  By row-processing functions, we
refer to functions such as that of the :class:`.Unicode` datatype, which
under Python 2 would often convert Python string objects to Python unicode
objects, as well as numeric functions that produce ``Decimal`` objects,
SQLite datetime functions that produce ``datetime`` objects from string
representations, and any number of user-defined functions which can be
created using :class:`.TypeDecorator`.

The rationale for this pattern was performance: fetching a row from a
legacy database containing dozens of columns would not need to run, for
example, a unicode converter on every element of each row if only a few
columns in the row were actually accessed.  SQLAlchemy eventually gained C
extensions which allowed for additional performance gains within this
process.

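The shift described above can be sketched in plain Python (this is an illustration of the eager-processing idea only, not SQLAlchemy's actual implementation): per-column "processor" callables, where ``None`` means pass-through, are applied to the raw DBAPI row once, up front, rather than on each access.

```python
# Illustrative sketch: apply per-column processors to a raw DBAPI row
# at construction time, the way the new Row does.  A processor of None
# means "use the raw value unchanged".
from decimal import Decimal

def process_row(raw_row, processors):
    return tuple(
        value if proc is None else proc(value)
        for value, proc in zip(raw_row, processors)
    )

raw = ("42.5", "alice", "7")
processors = (Decimal, None, int)
row = process_row(raw, processors)
# row == (Decimal("42.5"), "alice", 7)
```

Once built, the row is an ordinary tuple of final values; no conversion work remains to be done on access.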
As part of SQLAlchemy 1.4's goal of migrating towards SQLAlchemy 2.0's
updated usage patterns, row objects will be made to behave more like
tuples.  To suit this, the "proxy" behavior of :class:`.Row` has been
removed, and instead the row is populated with its final data values upon
construction.  This in particular allows an operation such as ``obj in
row`` to work as that of a tuple, where it tests for containment of
``obj`` in the row's values, rather than considering ``obj`` to be a key
in a mapping as is the case now.  For the moment, ``obj in row`` still
does a key lookup, that is, it detects whether the row has a particular
column name ``obj``; however, this behavior is deprecated, and in 2.0
:class:`.Row` will behave fully as a tuple-like object, with lookup of
keys performed via the ``._mapping`` attribute.

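The tuple-versus-mapping distinction can be demonstrated with a minimal stand-in class (``FakeRow`` here is hypothetical, not SQLAlchemy's ``Row``): containment on the row itself tests values, while key lookup goes through a separate ``._mapping`` view.

```python
# Minimal sketch of the 2.0-style split between tuple behavior and
# mapping behavior.  FakeRow is an illustrative stand-in, not the real
# Row class.
class FakeRow(tuple):
    def __new__(cls, values, keymap):
        self = super().__new__(cls, values)
        # keymap: column name -> position in the row
        self._mapping = {key: values[i] for key, i in keymap.items()}
        return self

row = FakeRow(("alice", 7), {"name": 0, "count": 1})
"alice" in row           # True: tuple containment tests values
"name" in row._mapping   # True: key lookup is done via ._mapping
row[0]                   # "alice": plain sequence access
```

This mirrors the deprecation path described above: ``"name" in row`` stops being a key test once rows are fully tuple-like.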
The result of removing the proxy behavior from rows is that the C code has
been simplified, and the performance of many operations is improved both
with and without the C extensions in use.  Modern Python DBAPIs handle
unicode conversion natively in most cases, and SQLAlchemy's unicode
handlers are very fast in any case, so the expense of unicode conversion
is a non-issue.

This change by itself has no behavioral impact on the row, but is part of
a larger series of changes in :ticket:`4710` which unifies the Core
row/result facade with that of the ORM.

:ticket:`4710`

.. _change_4449:

Improved column labeling for simple column expressions using CAST or similar
@@ -0,0 +1,13 @@
.. change::
    :tags: feature, engine

    The ``RowProxy`` class is no longer a "proxy" object, and is instead
    directly populated with the post-processed contents of the DBAPI row
    tuple upon construction.  Now named :class:`.Row`, the mechanics of how
    the Python-level value processors are invoked have been simplified,
    particularly as they impact the format of the C code, so that a DBAPI
    row is processed into a result tuple up front.  See the migration notes
    for further details.

    .. seealso::

        :ref:`change_4710_row`
@@ -657,7 +657,7 @@ Connection / Engine API
    :members:
    :private-members: _soft_close

.. autoclass:: RowProxy
.. autoclass:: Row
    :members:

.. autoclass:: Transaction

@@ -30,24 +30,23 @@ typedef struct {
    PyObject_HEAD
    PyObject *parent;
    PyObject *row;
    PyObject *processors;
    PyObject *keymap;
} BaseRowProxy;
} BaseRow;

/****************
 * BaseRowProxy *
 * BaseRow *
 ****************/

static PyObject *
safe_rowproxy_reconstructor(PyObject *self, PyObject *args)
{
    PyObject *cls, *state, *tmp;
    BaseRowProxy *obj;
    BaseRow *obj;

    if (!PyArg_ParseTuple(args, "OO", &cls, &state))
        return NULL;

    obj = (BaseRowProxy *)PyObject_CallMethod(cls, "__new__", "O", cls);
    obj = (BaseRow *)PyObject_CallMethod(cls, "__new__", "O", cls);
    if (obj == NULL)
        return NULL;

@@ -59,10 +58,10 @@ safe_rowproxy_reconstructor(PyObject *self, PyObject *args)
    Py_DECREF(tmp);

    if (obj->parent == NULL || obj->row == NULL ||
            obj->processors == NULL || obj->keymap == NULL) {
            obj->keymap == NULL) {
        PyErr_SetString(PyExc_RuntimeError,
            "__setstate__ for BaseRowProxy subclasses must set values "
            "for parent, row, processors and keymap");
            "__setstate__ for BaseRow subclasses must set values "
            "for parent, row, and keymap");
        Py_DECREF(obj);
        return NULL;
    }

@@ -71,30 +70,64 @@ safe_rowproxy_reconstructor(PyObject *self, PyObject *args)
}

static int
BaseRowProxy_init(BaseRowProxy *self, PyObject *args, PyObject *kwds)
BaseRow_init(BaseRow *self, PyObject *args, PyObject *kwds)
{
    PyObject *parent, *row, *processors, *keymap;
    PyObject *parent, *keymap, *row, *processors;
    Py_ssize_t num_values, num_processors;
    PyObject **valueptr, **funcptr, **resultptr;
    PyObject *func, *result, *processed_value, *values_fastseq;

    if (!PyArg_UnpackTuple(args, "BaseRowProxy", 4, 4,
                           &parent, &row, &processors, &keymap))
    if (!PyArg_UnpackTuple(args, "BaseRow", 4, 4,
                           &parent, &processors, &keymap, &row))
        return -1;

    Py_INCREF(parent);
    self->parent = parent;

    if (!PySequence_Check(row)) {
        PyErr_SetString(PyExc_TypeError, "row must be a sequence");
    values_fastseq = PySequence_Fast(row, "row must be a sequence");
    if (values_fastseq == NULL)
        return -1;
    }
    Py_INCREF(row);
    self->row = row;

    if (!PyList_CheckExact(processors)) {
        PyErr_SetString(PyExc_TypeError, "processors must be a list");
    num_values = PySequence_Length(values_fastseq);
    num_processors = PyList_Size(processors);
    if (num_values != num_processors) {
        PyErr_Format(PyExc_RuntimeError,
            "number of values in row (%d) differ from number of column "
            "processors (%d)",
            (int)num_values, (int)num_processors);
        return -1;
    }
    Py_INCREF(processors);
    self->processors = processors;

    result = PyTuple_New(num_values);
    if (result == NULL)
        return -1;

    valueptr = PySequence_Fast_ITEMS(values_fastseq);
    funcptr = PySequence_Fast_ITEMS(processors);
    resultptr = PySequence_Fast_ITEMS(result);
    while (--num_values >= 0) {
        func = *funcptr;
        if (func != Py_None) {
            processed_value = PyObject_CallFunctionObjArgs(
                func, *valueptr, NULL);
            if (processed_value == NULL) {
                Py_DECREF(values_fastseq);
                Py_DECREF(result);
                return -1;
            }
            *resultptr = processed_value;
        } else {
            Py_INCREF(*valueptr);
            *resultptr = *valueptr;
        }
        valueptr++;
        funcptr++;
        resultptr++;
    }

    Py_DECREF(values_fastseq);

    self->row = result;

    if (!PyDict_CheckExact(keymap)) {
        PyErr_SetString(PyExc_TypeError, "keymap must be a dict");

@@ -108,10 +141,10 @@ BaseRowProxy_init(BaseRowProxy *self, PyObject *args, PyObject *kwds)
|
||||
|
||||
/* We need the reduce method because otherwise the default implementation
|
||||
* does very weird stuff for pickle protocol 0 and 1. It calls
|
||||
* BaseRowProxy.__new__(RowProxy_instance) upon *pickling*.
|
||||
* BaseRow.__new__(Row_instance) upon *pickling*.
|
||||
*/
|
||||
static PyObject *
|
||||
BaseRowProxy_reduce(PyObject *self)
|
||||
BaseRow_reduce(PyObject *self)
|
||||
{
|
||||
PyObject *method, *state;
|
||||
PyObject *module, *reconstructor, *cls;
|
||||
@@ -147,11 +180,10 @@ BaseRowProxy_reduce(PyObject *self)
|
||||
}
|
||||
|
||||
static void
|
||||
BaseRowProxy_dealloc(BaseRowProxy *self)
|
||||
BaseRow_dealloc(BaseRow *self)
|
||||
{
|
||||
Py_XDECREF(self->parent);
|
||||
Py_XDECREF(self->row);
|
||||
Py_XDECREF(self->processors);
|
||||
Py_XDECREF(self->keymap);
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
Py_TYPE(self)->tp_free((PyObject *)self);
|
||||
@@ -161,73 +193,39 @@ BaseRowProxy_dealloc(BaseRowProxy *self)
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_processvalues(PyObject *values, PyObject *processors, int astuple)
|
||||
BaseRow_valuescollection(PyObject *values, int astuple)
|
||||
{
|
||||
Py_ssize_t num_values, num_processors;
|
||||
PyObject **valueptr, **funcptr, **resultptr;
|
||||
PyObject *func, *result, *processed_value, *values_fastseq;
|
||||
|
||||
num_values = PySequence_Length(values);
|
||||
num_processors = PyList_Size(processors);
|
||||
if (num_values != num_processors) {
|
||||
PyErr_Format(PyExc_RuntimeError,
|
||||
"number of values in row (%d) differ from number of column "
|
||||
"processors (%d)",
|
||||
(int)num_values, (int)num_processors);
|
||||
return NULL;
|
||||
}
|
||||
PyObject *result;
|
||||
|
||||
if (astuple) {
|
||||
result = PyTuple_New(num_values);
|
||||
result = PySequence_Tuple(values);
|
||||
} else {
|
||||
result = PyList_New(num_values);
|
||||
result = PySequence_List(values);
|
||||
}
|
||||
if (result == NULL)
|
||||
return NULL;
|
||||
|
||||
values_fastseq = PySequence_Fast(values, "row must be a sequence");
|
||||
if (values_fastseq == NULL)
|
||||
return NULL;
|
||||
|
||||
valueptr = PySequence_Fast_ITEMS(values_fastseq);
|
||||
funcptr = PySequence_Fast_ITEMS(processors);
|
||||
resultptr = PySequence_Fast_ITEMS(result);
|
||||
while (--num_values >= 0) {
|
||||
func = *funcptr;
|
||||
if (func != Py_None) {
|
||||
processed_value = PyObject_CallFunctionObjArgs(func, *valueptr,
|
||||
NULL);
|
||||
if (processed_value == NULL) {
|
||||
Py_DECREF(values_fastseq);
|
||||
Py_DECREF(result);
|
||||
return NULL;
|
||||
}
|
||||
*resultptr = processed_value;
|
||||
} else {
|
||||
Py_INCREF(*valueptr);
|
||||
*resultptr = *valueptr;
|
||||
}
|
||||
valueptr++;
|
||||
funcptr++;
|
||||
resultptr++;
|
||||
}
|
||||
Py_DECREF(values_fastseq);
|
||||
return result;
|
||||
}
|
||||
|
||||
static PyListObject *
|
||||
BaseRowProxy_values(BaseRowProxy *self)
|
||||
BaseRow_values_impl(BaseRow *self)
|
||||
{
|
||||
return (PyListObject *)BaseRowProxy_processvalues(self->row,
|
||||
self->processors, 0);
|
||||
return (PyListObject *)BaseRow_valuescollection(self->row, 0);
|
||||
}
|
||||
|
||||
static Py_hash_t
|
||||
BaseRow_hash(BaseRow *self)
|
||||
{
|
||||
return PyObject_Hash(self->row);
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_iter(BaseRowProxy *self)
|
||||
BaseRow_iter(BaseRow *self)
|
||||
{
|
||||
PyObject *values, *result;
|
||||
|
||||
values = BaseRowProxy_processvalues(self->row, self->processors, 1);
|
||||
values = BaseRow_valuescollection(self->row, 1);
|
||||
if (values == NULL)
|
||||
return NULL;
|
||||
|
||||
@@ -240,17 +238,34 @@ BaseRowProxy_iter(BaseRowProxy *self)
|
||||
}
|
||||
|
||||
static Py_ssize_t
|
||||
BaseRowProxy_length(BaseRowProxy *self)
|
||||
BaseRow_length(BaseRow *self)
|
||||
{
|
||||
return PySequence_Length(self->row);
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_subscript(BaseRowProxy *self, PyObject *key)
|
||||
BaseRow_getitem(BaseRow *self, Py_ssize_t i)
|
||||
{
|
||||
PyObject *processors, *values;
|
||||
PyObject *processor, *value, *processed_value;
|
||||
PyObject *row, *record, *result, *indexobject;
|
||||
PyObject *value;
|
||||
PyObject *row;
|
||||
|
||||
row = self->row;
|
||||
|
||||
// row is a Tuple
|
||||
value = PyTuple_GetItem(row, i);
|
||||
|
||||
if (value == NULL)
|
||||
return NULL;
|
||||
|
||||
Py_INCREF(value);
|
||||
|
||||
return value;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRow_getitem_by_object(BaseRow *self, PyObject *key)
|
||||
{
|
||||
PyObject *record, *indexobject;
|
||||
PyObject *exc_module, *exception, *cstr_obj;
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
PyObject *bytes;
|
||||
@@ -258,13 +273,99 @@ BaseRowProxy_subscript(BaseRowProxy *self, PyObject *key)
|
||||
char *cstr_key;
|
||||
long index;
|
||||
int key_fallback = 0;
|
||||
int tuple_check = 0;
|
||||
|
||||
// if record is non null, it's a borrowed reference
|
||||
record = PyDict_GetItem((PyObject *)self->keymap, key);
|
||||
|
||||
if (record == NULL) {
|
||||
record = PyObject_CallMethod(self->parent, "_key_fallback",
|
||||
"O", key);
|
||||
if (record == NULL)
|
||||
return NULL;
|
||||
key_fallback = 1; // boolean to indicate record is a new reference
|
||||
}
|
||||
|
||||
indexobject = PyTuple_GetItem(record, 0);
|
||||
if (indexobject == NULL)
|
||||
return NULL;
|
||||
|
||||
if (key_fallback) {
|
||||
Py_DECREF(record);
|
||||
}
|
||||
|
||||
if (indexobject == Py_None) {
|
||||
exc_module = PyImport_ImportModule("sqlalchemy.exc");
|
||||
if (exc_module == NULL)
|
||||
return NULL;
|
||||
|
||||
exception = PyObject_GetAttrString(exc_module,
|
||||
"InvalidRequestError");
|
||||
Py_DECREF(exc_module);
|
||||
if (exception == NULL)
|
||||
return NULL;
|
||||
|
||||
cstr_obj = PyTuple_GetItem(record, 2);
|
||||
if (cstr_obj == NULL)
|
||||
return NULL;
|
||||
|
||||
cstr_obj = PyObject_Str(cstr_obj);
|
||||
if (cstr_obj == NULL)
|
||||
return NULL;
|
||||
|
||||
/*
|
||||
FIXME: raise encoding error exception (in both versions below)
|
||||
if the key contains non-ascii chars, instead of an
|
||||
InvalidRequestError without any message like in the
|
||||
python version.
|
||||
*/
|
||||
|
||||
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
bytes = PyUnicode_AsASCIIString(cstr_obj);
|
||||
if (bytes == NULL)
|
||||
return NULL;
|
||||
cstr_key = PyBytes_AS_STRING(bytes);
|
||||
#else
|
||||
cstr_key = PyString_AsString(cstr_obj);
|
||||
#endif
|
||||
if (cstr_key == NULL) {
|
||||
Py_DECREF(cstr_obj);
|
||||
return NULL;
|
||||
}
|
||||
Py_DECREF(cstr_obj);
|
||||
|
||||
PyErr_Format(exception,
|
||||
"Ambiguous column name '%.200s' in "
|
||||
"result set column descriptions", cstr_key);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
index = PyLong_AsLong(indexobject);
|
||||
#else
|
||||
index = PyInt_AsLong(indexobject);
|
||||
#endif
|
||||
if ((index == -1) && PyErr_Occurred())
|
||||
/* -1 can be either the actual value, or an error flag. */
|
||||
return NULL;
|
||||
|
||||
return BaseRow_getitem(self, index);
|
||||
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRow_subscript_impl(BaseRow *self, PyObject *key, int asmapping)
|
||||
{
|
||||
PyObject *values;
|
||||
PyObject *result;
|
||||
long index;
|
||||
|
||||
#if PY_MAJOR_VERSION < 3
|
||||
if (PyInt_CheckExact(key)) {
|
||||
index = PyInt_AS_LONG(key);
|
||||
if (index < 0)
|
||||
index += BaseRowProxy_length(self);
|
||||
index += BaseRow_length(self);
|
||||
return BaseRow_getitem(self, index);
|
||||
} else
|
||||
#endif
|
||||
|
||||
@@ -274,142 +375,46 @@ BaseRowProxy_subscript(BaseRowProxy *self, PyObject *key)
|
||||
/* -1 can be either the actual value, or an error flag. */
|
||||
return NULL;
|
||||
if (index < 0)
|
||||
index += BaseRowProxy_length(self);
|
||||
index += BaseRow_length(self);
|
||||
return BaseRow_getitem(self, index);
|
||||
} else if (PySlice_Check(key)) {
|
||||
values = PyObject_GetItem(self->row, key);
|
||||
if (values == NULL)
|
||||
return NULL;
|
||||
|
||||
processors = PyObject_GetItem(self->processors, key);
|
||||
if (processors == NULL) {
|
||||
Py_DECREF(values);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
result = BaseRowProxy_processvalues(values, processors, 1);
|
||||
result = BaseRow_valuescollection(values, 1);
|
||||
Py_DECREF(values);
|
||||
Py_DECREF(processors);
|
||||
return result;
|
||||
} else {
|
||||
record = PyDict_GetItem((PyObject *)self->keymap, key);
|
||||
if (record == NULL) {
|
||||
record = PyObject_CallMethod(self->parent, "_key_fallback",
|
||||
"O", key);
|
||||
if (record == NULL)
|
||||
return NULL;
|
||||
key_fallback = 1;
|
||||
}
|
||||
|
||||
indexobject = PyTuple_GetItem(record, 2);
|
||||
if (indexobject == NULL)
|
||||
return NULL;
|
||||
|
||||
if (key_fallback) {
|
||||
Py_DECREF(record);
|
||||
}
|
||||
|
||||
if (indexobject == Py_None) {
|
||||
exc_module = PyImport_ImportModule("sqlalchemy.exc");
|
||||
if (exc_module == NULL)
|
||||
return NULL;
|
||||
|
||||
exception = PyObject_GetAttrString(exc_module,
|
||||
"InvalidRequestError");
|
||||
Py_DECREF(exc_module);
|
||||
if (exception == NULL)
|
||||
return NULL;
|
||||
|
||||
cstr_obj = PyTuple_GetItem(record, 1);
|
||||
if (cstr_obj == NULL)
|
||||
return NULL;
|
||||
|
||||
cstr_obj = PyObject_Str(cstr_obj);
|
||||
if (cstr_obj == NULL)
|
||||
return NULL;
|
||||
|
||||
/*
|
||||
FIXME: raise encoding error exception (in both versions below)
|
||||
if the key contains non-ascii chars, instead of an
|
||||
InvalidRequestError without any message like in the
|
||||
python version.
|
||||
*/
|
||||
|
||||
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
bytes = PyUnicode_AsASCIIString(cstr_obj);
|
||||
if (bytes == NULL)
|
||||
return NULL;
|
||||
cstr_key = PyBytes_AS_STRING(bytes);
|
||||
#else
|
||||
cstr_key = PyString_AsString(cstr_obj);
|
||||
#endif
|
||||
if (cstr_key == NULL) {
|
||||
Py_DECREF(cstr_obj);
|
||||
/*
|
||||
// if we want to warn for non-integer access by getitem,
|
||||
// that would happen here.
|
||||
if (!asmapping) {
|
||||
tmp = PyObject_CallMethod(self->parent, "_warn_for_nonint", "");
|
||||
if (tmp == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
Py_DECREF(cstr_obj);
|
||||
|
||||
PyErr_Format(exception,
|
||||
"Ambiguous column name '%.200s' in "
|
||||
"result set column descriptions", cstr_key);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
index = PyLong_AsLong(indexobject);
|
||||
#else
|
||||
index = PyInt_AsLong(indexobject);
|
||||
#endif
|
||||
if ((index == -1) && PyErr_Occurred())
|
||||
/* -1 can be either the actual value, or an error flag. */
|
||||
return NULL;
|
||||
}
|
||||
processor = PyList_GetItem(self->processors, index);
|
||||
if (processor == NULL)
|
||||
return NULL;
|
||||
|
||||
row = self->row;
|
||||
if (PyTuple_CheckExact(row)) {
|
||||
value = PyTuple_GetItem(row, index);
|
||||
tuple_check = 1;
|
||||
}
|
||||
else {
|
||||
value = PySequence_GetItem(row, index);
|
||||
tuple_check = 0;
|
||||
}
|
||||
|
||||
if (value == NULL)
|
||||
return NULL;
|
||||
|
||||
if (processor != Py_None) {
|
||||
processed_value = PyObject_CallFunctionObjArgs(processor, value, NULL);
|
||||
if (!tuple_check) {
|
||||
Py_DECREF(value);
|
||||
}
|
||||
return processed_value;
|
||||
} else {
|
||||
if (tuple_check) {
|
||||
Py_INCREF(value);
|
||||
}
|
||||
return value;
|
||||
Py_DECREF(tmp);
|
||||
}*/
|
||||
return BaseRow_getitem_by_object(self, key);
|
||||
}
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_getitem(PyObject *self, Py_ssize_t i)
|
||||
BaseRow_subscript(BaseRow *self, PyObject *key)
|
||||
{
|
||||
PyObject *index;
|
||||
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
index = PyLong_FromSsize_t(i);
|
||||
#else
|
||||
index = PyInt_FromSsize_t(i);
|
||||
#endif
|
||||
return BaseRowProxy_subscript((BaseRowProxy*)self, index);
|
||||
return BaseRow_subscript_impl(self, key, 0);
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_getattro(BaseRowProxy *self, PyObject *name)
|
||||
BaseRow_subscript_mapping(BaseRow *self, PyObject *key)
|
||||
{
|
||||
return BaseRow_subscript_impl(self, key, 1);
|
||||
}
|
||||
|
||||
|
||||
static PyObject *
|
||||
BaseRow_getattro(BaseRow *self, PyObject *name)
|
||||
{
|
||||
PyObject *tmp;
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
@@ -424,7 +429,7 @@ BaseRowProxy_getattro(BaseRowProxy *self, PyObject *name)
|
||||
else
|
||||
return tmp;
|
||||
|
||||
tmp = BaseRowProxy_subscript(self, name);
|
||||
tmp = BaseRow_subscript_mapping(self, name);
|
||||
if (tmp == NULL && PyErr_ExceptionMatches(PyExc_KeyError)) {
|
||||
|
||||
#if PY_MAJOR_VERSION >= 3
|
||||
@@ -453,14 +458,14 @@ BaseRowProxy_getattro(BaseRowProxy *self, PyObject *name)
|
||||
***********************/
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_getparent(BaseRowProxy *self, void *closure)
|
||||
BaseRow_getparent(BaseRow *self, void *closure)
|
||||
{
|
||||
Py_INCREF(self->parent);
|
||||
return self->parent;
|
||||
}
|
||||
|
||||
static int
|
||||
BaseRowProxy_setparent(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
BaseRow_setparent(BaseRow *self, PyObject *value, void *closure)
|
||||
{
|
||||
PyObject *module, *cls;
|
||||
|
||||
@@ -494,14 +499,14 @@ BaseRowProxy_setparent(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_getrow(BaseRowProxy *self, void *closure)
|
||||
BaseRow_getrow(BaseRow *self, void *closure)
|
||||
{
|
||||
Py_INCREF(self->row);
|
||||
return self->row;
|
||||
}
|
||||
|
||||
static int
|
||||
BaseRowProxy_setrow(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
BaseRow_setrow(BaseRow *self, PyObject *value, void *closure)
|
||||
{
|
||||
if (value == NULL) {
|
||||
PyErr_SetString(PyExc_TypeError,
|
||||
@@ -522,44 +527,17 @@ BaseRowProxy_setrow(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_getprocessors(BaseRowProxy *self, void *closure)
|
||||
{
|
||||
Py_INCREF(self->processors);
|
||||
return self->processors;
|
||||
}
|
||||
|
||||
static int
|
||||
BaseRowProxy_setprocessors(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
{
|
||||
if (value == NULL) {
|
||||
PyErr_SetString(PyExc_TypeError,
|
||||
"Cannot delete the 'processors' attribute");
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (!PyList_CheckExact(value)) {
|
||||
PyErr_SetString(PyExc_TypeError,
|
||||
"The 'processors' attribute value must be a list");
|
||||
return -1;
|
||||
}
|
||||
|
||||
Py_XDECREF(self->processors);
|
||||
Py_INCREF(value);
|
||||
self->processors = value;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
BaseRowProxy_getkeymap(BaseRowProxy *self, void *closure)
|
||||
BaseRow_getkeymap(BaseRow *self, void *closure)
|
||||
{
|
||||
Py_INCREF(self->keymap);
|
||||
return self->keymap;
|
||||
}
|
||||
|
||||
static int
|
||||
BaseRowProxy_setkeymap(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
BaseRow_setkeymap(BaseRow *self, PyObject *value, void *closure)
|
||||
{
|
||||
if (value == NULL) {
|
||||
PyErr_SetString(PyExc_TypeError,
|
||||
@@ -580,39 +558,39 @@ BaseRowProxy_setkeymap(BaseRowProxy *self, PyObject *value, void *closure)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static PyGetSetDef BaseRowProxy_getseters[] = {
|
||||
static PyGetSetDef BaseRow_getseters[] = {
|
||||
{"_parent",
|
||||
(getter)BaseRowProxy_getparent, (setter)BaseRowProxy_setparent,
|
||||
(getter)BaseRow_getparent, (setter)BaseRow_setparent,
|
||||
"ResultMetaData",
|
||||
NULL},
|
||||
{"_row",
|
||||
(getter)BaseRowProxy_getrow, (setter)BaseRowProxy_setrow,
|
||||
"Original row tuple",
|
||||
NULL},
|
||||
{"_processors",
|
||||
(getter)BaseRowProxy_getprocessors, (setter)BaseRowProxy_setprocessors,
|
||||
"list of type processors",
|
||||
{"_data",
|
||||
(getter)BaseRow_getrow, (setter)BaseRow_setrow,
|
||||
"processed data list",
|
||||
NULL},
|
||||
{"_keymap",
|
||||
(getter)BaseRowProxy_getkeymap, (setter)BaseRowProxy_setkeymap,
|
||||
"Key to (processor, index) dict",
|
||||
(getter)BaseRow_getkeymap, (setter)BaseRow_setkeymap,
|
||||
"Key to (obj, index) dict",
|
||||
NULL},
|
||||
{NULL}
|
||||
};
|
||||
|
||||
static PyMethodDef BaseRowProxy_methods[] = {
|
||||
{"values", (PyCFunction)BaseRowProxy_values, METH_NOARGS,
|
||||
"Return the values represented by this BaseRowProxy as a list."},
|
||||
{"__reduce__", (PyCFunction)BaseRowProxy_reduce, METH_NOARGS,
|
||||
static PyMethodDef BaseRow_methods[] = {
|
||||
{"_values_impl", (PyCFunction)BaseRow_values_impl, METH_NOARGS,
|
||||
"Return the values represented by this BaseRow as a list."},
|
||||
{"__reduce__", (PyCFunction)BaseRow_reduce, METH_NOARGS,
|
||||
"Pickle support method."},
|
||||
{"_get_by_key_impl", (PyCFunction)BaseRow_subscript, METH_O,
|
||||
"implement mapping-like getitem as well as sequence getitem"},
|
||||
{"_get_by_key_impl_mapping", (PyCFunction)BaseRow_subscript_mapping, METH_O,
|
||||
"implement mapping-like getitem as well as sequence getitem"},
|
||||
{NULL} /* Sentinel */
|
||||
};
|
||||
|
||||
static PySequenceMethods BaseRowProxy_as_sequence = {
|
||||
(lenfunc)BaseRowProxy_length, /* sq_length */
|
||||
static PySequenceMethods BaseRow_as_sequence = {
|
||||
(lenfunc)BaseRow_length, /* sq_length */
|
||||
0, /* sq_concat */
|
||||
0, /* sq_repeat */
|
||||
(ssizeargfunc)BaseRowProxy_getitem, /* sq_item */
|
||||
(ssizeargfunc)BaseRow_getitem, /* sq_item */
|
||||
0, /* sq_slice */
|
||||
0, /* sq_ass_item */
|
||||
0, /* sq_ass_slice */
|
||||
@@ -621,56 +599,235 @@ static PySequenceMethods BaseRowProxy_as_sequence = {
|
||||
0, /* sq_inplace_repeat */
|
||||
};
|
||||
|
||||
static PyMappingMethods BaseRowProxy_as_mapping = {
|
||||
(lenfunc)BaseRowProxy_length, /* mp_length */
|
||||
(binaryfunc)BaseRowProxy_subscript, /* mp_subscript */
|
||||
static PyMappingMethods BaseRow_as_mapping = {
|
||||
(lenfunc)BaseRow_length, /* mp_length */
|
||||
(binaryfunc)BaseRow_subscript_mapping, /* mp_subscript */
|
||||
0 /* mp_ass_subscript */
|
||||
};
|
||||
|
||||
static PyTypeObject BaseRowProxyType = {
|
||||
static PyTypeObject BaseRowType = {
|
||||
PyVarObject_HEAD_INIT(NULL, 0)
|
||||
"sqlalchemy.cresultproxy.BaseRowProxy", /* tp_name */
|
||||
sizeof(BaseRowProxy), /* tp_basicsize */
|
||||
"sqlalchemy.cresultproxy.BaseRow", /* tp_name */
|
||||
sizeof(BaseRow), /* tp_basicsize */
|
||||
0, /* tp_itemsize */
|
||||
(destructor)BaseRowProxy_dealloc, /* tp_dealloc */
|
||||
(destructor)BaseRow_dealloc, /* tp_dealloc */
|
||||
0, /* tp_print */
|
||||
0, /* tp_getattr */
|
||||
0, /* tp_setattr */
|
||||
0, /* tp_compare */
|
||||
0, /* tp_repr */
|
||||
0, /* tp_as_number */
|
||||
&BaseRowProxy_as_sequence, /* tp_as_sequence */
|
||||
&BaseRowProxy_as_mapping, /* tp_as_mapping */
|
||||
0, /* tp_hash */
|
||||
&BaseRow_as_sequence, /* tp_as_sequence */
|
||||
&BaseRow_as_mapping, /* tp_as_mapping */
|
||||
(hashfunc)BaseRow_hash, /* tp_hash */
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
(getattrofunc)BaseRowProxy_getattro,/* tp_getattro */
|
||||
(getattrofunc)BaseRow_getattro,/* tp_getattro */
|
||||
0, /* tp_setattro */
|
||||
0, /* tp_as_buffer */
|
||||
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
|
||||
"BaseRowProxy is a abstract base class for RowProxy", /* tp_doc */
|
||||
"BaseRow is a abstract base class for Row", /* tp_doc */
|
||||
0, /* tp_traverse */
|
||||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
0, /* tp_weaklistoffset */
|
||||
(getiterfunc)BaseRowProxy_iter, /* tp_iter */
|
||||
(getiterfunc)BaseRow_iter, /* tp_iter */
|
||||
0, /* tp_iternext */
|
||||
BaseRowProxy_methods, /* tp_methods */
|
||||
BaseRow_methods, /* tp_methods */
|
||||
0, /* tp_members */
|
||||
BaseRowProxy_getseters, /* tp_getset */
|
||||
BaseRow_getseters, /* tp_getset */
|
||||
0, /* tp_base */
|
||||
0, /* tp_dict */
|
||||
0, /* tp_descr_get */
|
||||
0, /* tp_descr_set */
|
||||
0, /* tp_dictoffset */
|
||||
(initproc)BaseRowProxy_init, /* tp_init */
|
||||
(initproc)BaseRow_init, /* tp_init */
|
||||
0, /* tp_alloc */
|
||||
0 /* tp_new */
|
||||
};
|
||||
|
||||
|
||||
|
||||
/* _tuplegetter function ************************************************/
|
||||
/*
|
||||
retrieves segments of a row as tuples.
|
||||
|
||||
mostly like operator.itemgetter but calls a fixed method instead,
|
||||
returns tuple every time.
|
||||
|
||||
*/
|
||||
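The comment above describes ``_tuplegetter`` as being mostly like ``operator.itemgetter`` except that it always returns a tuple. A rough Python equivalent (illustrative only; the C version routes each lookup through the row's ``_get_by_key_impl_mapping`` method rather than plain indexing) might look like:

```python
# Illustrative Python analogue of _tuplegetter: like operator.itemgetter,
# but the result is a tuple even when only one item is requested.
def tuplegetter(*items):
    def getter(row):
        # The real C version calls a fixed row method per item; plain
        # indexing stands in for that here.
        return tuple(row[i] for i in items)
    return getter

get_one = tuplegetter(1)
get_one(("a", "b", "c"))          # ("b",) -- a tuple even for one item
tuplegetter(0, 2)(("a", "b", "c"))  # ("a", "c")
```

By contrast, ``operator.itemgetter(1)`` would return the bare item ``"b"``; always producing a tuple keeps the caller's unpacking code uniform.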
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
Py_ssize_t nitems;
|
||||
PyObject *item;
|
||||
} tuplegetterobject;
|
||||
|
||||
static PyTypeObject tuplegetter_type;
|
||||
|
||||
static PyObject *
|
||||
tuplegetter_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
||||
{
|
||||
tuplegetterobject *tg;
|
||||
PyObject *item;
|
||||
Py_ssize_t nitems;
|
||||
|
||||
if (!_PyArg_NoKeywords("tuplegetter", kwds))
|
||||
return NULL;
    nitems = PyTuple_GET_SIZE(args);
    item = args;

    tg = PyObject_GC_New(tuplegetterobject, &tuplegetter_type);
    if (tg == NULL)
        return NULL;

    Py_INCREF(item);
    tg->item = item;
    tg->nitems = nitems;
    PyObject_GC_Track(tg);
    return (PyObject *)tg;
}

static void
tuplegetter_dealloc(tuplegetterobject *tg)
{
    PyObject_GC_UnTrack(tg);
    Py_XDECREF(tg->item);
    PyObject_GC_Del(tg);
}

static int
tuplegetter_traverse(tuplegetterobject *tg, visitproc visit, void *arg)
{
    Py_VISIT(tg->item);
    return 0;
}

static PyObject *
tuplegetter_call(tuplegetterobject *tg, PyObject *args, PyObject *kw)
{
    PyObject *row, *result;
    Py_ssize_t i, nitems = tg->nitems;

    assert(PyTuple_CheckExact(args));

    // this is normally a BaseRow subclass but we are not doing
    // strict checking at the moment
    row = PyTuple_GET_ITEM(args, 0);

    assert(PyTuple_Check(tg->item));
    assert(PyTuple_GET_SIZE(tg->item) == nitems);

    result = PyTuple_New(nitems);
    if (result == NULL)
        return NULL;

    for (i = 0; i < nitems; i++) {
        PyObject *item, *val;
        item = PyTuple_GET_ITEM(tg->item, i);

        val = PyObject_CallMethod(row, "_get_by_key_impl_mapping", "O", item);

        // generic itemgetter version; if BaseRow __getitem__ is implemented
        // in C directly then we can use that
        //val = PyObject_GetItem(row, item);
        if (val == NULL) {
            Py_DECREF(result);
            return NULL;
        }
        PyTuple_SET_ITEM(result, i, val);
    }
    return result;
}

static PyObject *
tuplegetter_repr(tuplegetterobject *tg)
{
    PyObject *repr;
    const char *reprfmt;

    int status = Py_ReprEnter((PyObject *)tg);
    if (status != 0) {
        if (status < 0)
            return NULL;
        return PyUnicode_FromFormat("%s(...)", Py_TYPE(tg)->tp_name);
    }

    reprfmt = tg->nitems == 1 ? "%s(%R)" : "%s%R";
    repr = PyUnicode_FromFormat(reprfmt, Py_TYPE(tg)->tp_name, tg->item);
    Py_ReprLeave((PyObject *)tg);
    return repr;
}

static PyObject *
tuplegetter_reduce(tuplegetterobject *tg, PyObject *Py_UNUSED(ignored))
{
    return PyTuple_Pack(2, Py_TYPE(tg), tg->item);
}

PyDoc_STRVAR(reduce_doc, "Return state information for pickling");

static PyMethodDef tuplegetter_methods[] = {
    {"__reduce__", (PyCFunction)tuplegetter_reduce, METH_NOARGS,
     reduce_doc},
    {NULL}
};

PyDoc_STRVAR(tuplegetter_doc,
"tuplegetter(item, ...) --> tuplegetter object\n\
\n\
Return a callable object that fetches the given item(s) from its operand\n\
and returns them as a tuple.\n");

static PyTypeObject tuplegetter_type = {
    PyVarObject_HEAD_INIT(NULL, 0)
    "sqlalchemy.engine.util..tuplegetter",      /* tp_name */
    sizeof(tuplegetterobject),                  /* tp_basicsize */
    0,                                          /* tp_itemsize */
    /* methods */
    (destructor)tuplegetter_dealloc,            /* tp_dealloc */
    0,                                          /* tp_vectorcall_offset */
    0,                                          /* tp_getattr */
    0,                                          /* tp_setattr */
    0,                                          /* tp_as_async */
    (reprfunc)tuplegetter_repr,                 /* tp_repr */
    0,                                          /* tp_as_number */
    0,                                          /* tp_as_sequence */
    0,                                          /* tp_as_mapping */
    0,                                          /* tp_hash */
    (ternaryfunc)tuplegetter_call,              /* tp_call */
    0,                                          /* tp_str */
    PyObject_GenericGetAttr,                    /* tp_getattro */
    0,                                          /* tp_setattro */
    0,                                          /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,    /* tp_flags */
    tuplegetter_doc,                            /* tp_doc */
    (traverseproc)tuplegetter_traverse,         /* tp_traverse */
    0,                                          /* tp_clear */
    0,                                          /* tp_richcompare */
    0,                                          /* tp_weaklistoffset */
    0,                                          /* tp_iter */
    0,                                          /* tp_iternext */
    tuplegetter_methods,                        /* tp_methods */
    0,                                          /* tp_members */
    0,                                          /* tp_getset */
    0,                                          /* tp_base */
    0,                                          /* tp_dict */
    0,                                          /* tp_descr_get */
    0,                                          /* tp_descr_set */
    0,                                          /* tp_dictoffset */
    0,                                          /* tp_init */
    0,                                          /* tp_alloc */
    tuplegetter_new,                            /* tp_new */
    0,                                          /* tp_free */
};


static PyMethodDef module_methods[] = {
    {"safe_rowproxy_reconstructor", safe_rowproxy_reconstructor, METH_VARARGS,
     "reconstruct a RowProxy instance from its pickled form."},
     "reconstruct a Row instance from its pickled form."},
    {NULL, NULL, 0, NULL}  /* Sentinel */
};

@@ -706,10 +863,13 @@ initcresultproxy(void)
{
    PyObject *m;

    BaseRowProxyType.tp_new = PyType_GenericNew;
    if (PyType_Ready(&BaseRowProxyType) < 0)
    BaseRowType.tp_new = PyType_GenericNew;
    if (PyType_Ready(&BaseRowType) < 0)
        INITERROR;

    if (PyType_Ready(&tuplegetter_type) < 0)
        return NULL;

#if PY_MAJOR_VERSION >= 3
    m = PyModule_Create(&module_def);
#else
@@ -718,8 +878,11 @@ initcresultproxy(void)
    if (m == NULL)
        INITERROR;

    Py_INCREF(&BaseRowProxyType);
    PyModule_AddObject(m, "BaseRowProxy", (PyObject *)&BaseRowProxyType);
    Py_INCREF(&BaseRowType);
    PyModule_AddObject(m, "BaseRow", (PyObject *)&BaseRowType);

    Py_INCREF(&tuplegetter_type);
    PyModule_AddObject(m, "tuplegetter", (PyObject *)&tuplegetter_type);

#if PY_MAJOR_VERSION >= 3
    return m;
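The C ``tuplegetter`` above is callable with a row and returns a tuple of the values at its configured keys, fetching each one through the row's ``_get_by_key_impl_mapping()`` accessor. A minimal pure-Python sketch of the same behavior, using a hypothetical ``FakeRow`` stand-in (not part of the commit) for a BaseRow-like object:

```python
import operator


class FakeRow:
    """Hypothetical stand-in: anything exposing _get_by_key_impl_mapping()."""

    def __init__(self, data):
        self._data = data

    def _get_by_key_impl_mapping(self, key):
        return self._data[key]


def tuplegetter(*items):
    # mirror tuplegetter_call: fetch each configured key from the row,
    # then pack the results into a plain tuple
    getters = [
        operator.methodcaller("_get_by_key_impl_mapping", item)
        for item in items
    ]
    return lambda row: tuple(g(row) for g in getters)


getter = tuplegetter(0, 2)
row = FakeRow(("a", "b", "c"))
assert getter(row) == ("a", "c")
```

As in the C version, the keys are bound once at construction time, so one getter can be applied to many rows.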
@@ -2309,7 +2309,7 @@ class MySQLDialect(default.DefaultDialect):
        """Proxy result rows to smooth over MySQL-Python driver
        inconsistencies."""

        return [_DecodingRowProxy(row, charset) for row in rp.fetchall()]
        return [_DecodingRow(row, charset) for row in rp.fetchall()]

    def _compat_fetchone(self, rp, charset=None):
        """Proxy a result row to smooth over MySQL-Python driver
@@ -2317,7 +2317,7 @@ class MySQLDialect(default.DefaultDialect):

        row = rp.fetchone()
        if row:
            return _DecodingRowProxy(row, charset)
            return _DecodingRow(row, charset)
        else:
            return None

@@ -2327,7 +2327,7 @@ class MySQLDialect(default.DefaultDialect):

        row = rp.first()
        if row:
            return _DecodingRowProxy(row, charset)
            return _DecodingRow(row, charset)
        else:
            return None

@@ -2916,7 +2916,7 @@ class MySQLDialect(default.DefaultDialect):
        return rows


class _DecodingRowProxy(object):
class _DecodingRow(object):
    """Return unicode-decoded values based on type inspection.

    Smooth over data type issues (esp. with alpha driver versions) and

@@ -546,10 +546,10 @@ names are still addressable*::
    1

Therefore, the workaround applied by SQLAlchemy only impacts
:meth:`.ResultProxy.keys` and :meth:`.RowProxy.keys()` in the public API. In
:meth:`.ResultProxy.keys` and :meth:`.Row.keys()` in the public API. In
the very specific case where an application is forced to use column names that
contain dots, and the functionality of :meth:`.ResultProxy.keys` and
:meth:`.RowProxy.keys()` is required to return these dotted names unmodified,
:meth:`.Row.keys()` is required to return these dotted names unmodified,
the ``sqlite_raw_colnames`` execution option may be provided, either on a
per-:class:`.Connection` basis::
@@ -32,13 +32,13 @@ from .interfaces import ExceptionContext  # noqa
from .interfaces import ExecutionContext  # noqa
from .interfaces import TypeCompiler  # noqa
from .mock import create_mock_engine
from .result import BaseRowProxy  # noqa
from .result import BaseRow  # noqa
from .result import BufferedColumnResultProxy  # noqa
from .result import BufferedColumnRow  # noqa
from .result import BufferedRowResultProxy  # noqa
from .result import FullyBufferedResultProxy  # noqa
from .result import ResultProxy  # noqa
from .result import RowProxy  # noqa
from .result import Row  # noqa
from .util import connection_memoize  # noqa
from ..sql import ddl  # noqa
@@ -6,7 +6,7 @@
# the MIT License: http://www.opensource.org/licenses/mit-license.php

"""Define result set constructs including :class:`.ResultProxy`
and :class:`.RowProxy`."""
and :class:`.Row`."""


import collections
@@ -17,8 +17,15 @@ from .. import util
from ..sql import expression
from ..sql import sqltypes
from ..sql import util as sql_util
from ..sql.compiler import RM_NAME
from ..sql.compiler import RM_OBJECTS
from ..sql.compiler import RM_RENDERED_NAME
from ..sql.compiler import RM_TYPE
from ..util.compat import collections_abc


_UNPICKLED = util.symbol("unpickled")

# This reconstructor is necessary so that pickles with the C extension or
# without use the same Binary format.
try:
@@ -43,21 +50,27 @@ except ImportError:


try:
    from sqlalchemy.cresultproxy import BaseRowProxy
    from sqlalchemy.cresultproxy import BaseRow
    from sqlalchemy.cresultproxy import tuplegetter as _tuplegetter

    _baserowproxy_usecext = True
    _baserow_usecext = True
except ImportError:
    _baserowproxy_usecext = False
    _baserow_usecext = False

    class BaseRowProxy(object):
        __slots__ = ("_parent", "_row", "_processors", "_keymap")
    class BaseRow(object):
        __slots__ = ("_parent", "_data", "_keymap")

        def __init__(self, parent, row, processors, keymap):
            """RowProxy objects are constructed by ResultProxy objects."""
        def __init__(self, parent, processors, keymap, data):
            """Row objects are constructed by ResultProxy objects."""

            self._parent = parent
            self._row = row
            self._processors = processors

            self._data = tuple(
                [
                    proc(value) if proc else value
                    for proc, value in zip(processors, data)
                ]
            )
            self._keymap = keymap
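The core change of the commit is visible in the new ``__init__`` above: instead of storing the raw DBAPI row plus the processor list and converting values lazily on each access, the result processors run once at construction and ``_data`` holds the final values. A minimal sketch of that comprehension with toy processors (the processor callables here are illustrative, not the ones SQLAlchemy generates):

```python
# None means "pass the raw DBAPI value through unchanged"
processors = [int, None, str.upper]
raw_dbapi_row = ("42", 7.5, "abc")

# the same comprehension BaseRow.__init__ uses to build self._data
processed = tuple(
    proc(value) if proc else value
    for proc, value in zip(processors, raw_dbapi_row)
)
assert processed == (42, 7.5, "ABC")
```

After this, every subsequent access (`__getitem__`, `__iter__`, slicing, hashing) is a plain tuple operation on already-processed data.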
        def __reduce__(self):
@@ -66,63 +79,70 @@ except ImportError:
                (self.__class__, self.__getstate__()),
            )

        def values(self):
            """Return the values represented by this RowProxy as a list."""
        def _values_impl(self):
            return list(self)

        def __iter__(self):
            for processor, value in zip(self._processors, self._row):
                if processor is None:
                    yield value
                else:
                    yield processor(value)
            return iter(self._data)

        def __len__(self):
            return len(self._row)
            return len(self._data)

        def __getitem__(self, key):
        def __hash__(self):
            return hash(self._data)

        def _get_by_key_impl(self, key):
            try:
                processor, obj, index = self._keymap[key]
                rec = self._keymap[key]
            except KeyError:
                processor, obj, index = self._parent._key_fallback(key)
                rec = self._parent._key_fallback(key)
            except TypeError:
                # the non-C version detects a slice using TypeError.
                # this is pretty inefficient for the slice use case
                # but is more efficient for the integer use case since we
                # don't have to check it up front.
                if isinstance(key, slice):
                    l = []
                    for processor, value in zip(
                        self._processors[key], self._row[key]
                    ):
                        if processor is None:
                            l.append(value)
                        else:
                            l.append(processor(value))
                    return tuple(l)
                    return tuple(self._data[key])
                else:
                    raise
            if index is None:
            if rec[MD_INDEX] is None:
                raise exc.InvalidRequestError(
                    "Ambiguous column name '%s' in "
                    "result set column descriptions" % obj
                    "result set column descriptions" % rec[MD_LOOKUP_KEY]
                )
            if processor is not None:
                return processor(self._row[index])
            else:
                return self._row[index]

            return self._data[rec[MD_INDEX]]

        def _get_by_key_impl_mapping(self, key):
            # the C code has two different methods so that we can distinguish
            # between tuple-like keys (integers, slices) and mapping-like keys
            # (strings, objects)
            return self._get_by_key_impl(key)

        def __getattr__(self, name):
            try:
                return self[name]
                return self._get_by_key_impl_mapping(name)
            except KeyError as e:
                raise AttributeError(e.args[0])
class RowProxy(BaseRowProxy):
    """Proxy values from a single cursor row.
class Row(BaseRow, collections_abc.Sequence):
    """Represent a single result row.

    The :class:`.Row` object seeks to act mostly like a Python named
    tuple, but also provides for mapping-oriented access via the
    :attr:`.Row._mapping` attribute.

    .. seealso::

        :ref:`coretutorial_selecting` - includes examples of selecting
        rows from SELECT statements.

    .. versionchanged:: 1.4

        Renamed ``RowProxy`` to :class:`.Row`. :class:`.Row` is no longer a
        "proxy" object in that it contains the final form of data within it.

    Mostly follows "ordered dictionary" behavior, mapping result
    values to the string-based column name, the integer position of
    the result in the row, as well as Column instances which can be
    mapped to the original Columns that produced this result set (for
    results that correspond to constructed SQL expressions).
    """

    __slots__ = ()
@@ -131,23 +151,22 @@ class RowProxy(BaseRowProxy):
        return self._parent._has_key(key)

    def __getstate__(self):
        return {"_parent": self._parent, "_row": tuple(self)}
        return {"_parent": self._parent, "_data": self._data}

    def __setstate__(self, state):
        self._parent = parent = state["_parent"]
        self._row = state["_row"]
        self._processors = parent._processors
        self._data = state["_data"]
        self._keymap = parent._keymap

    __hash__ = None

    def _op(self, other, op):
        return (
            op(tuple(self), tuple(other))
            if isinstance(other, RowProxy)
            if isinstance(other, Row)
            else op(tuple(self), other)
        )

    __hash__ = BaseRow.__hash__

    def __lt__(self, other):
        return self._op(other, operator.lt)

@@ -170,19 +189,22 @@ class RowProxy(BaseRowProxy):
        return repr(sql_util._repr_row(self))

    def has_key(self, key):
        """Return True if this RowProxy contains the given key."""
        """Return True if this Row contains the given key."""

        return self._parent._has_key(key)

    def __getitem__(self, key):
        return self._get_by_key_impl(key)

    def items(self):
        """Return a list of tuples, each tuple containing a key/value pair."""
        # TODO: no coverage here
        return [(key, self[key]) for key in self.keys()]

    def keys(self):
        """Return the list of keys as strings represented by this RowProxy."""
        """Return the list of keys as strings represented by this Row."""

        return self._parent.keys
        return [k for k in self._parent.keys if k is not None]

    def iterkeys(self):
        return iter(self._parent.keys)
@@ -190,13 +212,23 @@ class RowProxy(BaseRowProxy):
    def itervalues(self):
        return iter(self)

    def values(self):
        """Return the values represented by this Row as a list."""
        return self._values_impl()


try:
    # Register RowProxy with Sequence,
    # so sequence protocol is implemented
    util.collections_abc.Sequence.register(RowProxy)
except ImportError:
    pass

BaseRowProxy = BaseRow
RowProxy = Row


# metadata entry tuple indexes.
# using raw tuple is faster than namedtuple.
MD_INDEX = 0  # integer index in cursor.description
MD_OBJECTS = 1  # other string keys and ColumnElement obj that can match
MD_LOOKUP_KEY = 2  # string key we usually expect for key-based lookup
MD_RENDERED_NAME = 3  # name that is usually in cursor.description
MD_PROCESSOR = 4  # callable to process a result value into a row
MD_UNTRANSLATED = 5  # raw name from cursor.description
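The MD_* constants above index into the plain metadata tuples that replace the old three-element ``(processor, obj, index)`` records throughout this commit. A toy sketch of one such record (the column name, objects tuple, and processor here are made up for illustration):

```python
# same index constants as the commit defines
MD_INDEX, MD_OBJECTS, MD_LOOKUP_KEY = 0, 1, 2
MD_RENDERED_NAME, MD_PROCESSOR, MD_UNTRANSLATED = 3, 4, 5

# hypothetical entry for a column "user_id" at position 0, whose
# result processor converts DBAPI strings to int
entry = (0, ("user_id",), "user_id", "user_id", int, None)

assert entry[MD_INDEX] == 0
assert entry[MD_LOOKUP_KEY] == "user_id"
assert entry[MD_PROCESSOR]("7") == 7
```

Because the same six-element tuple is stored under every key that can match the column, lookups by position, string name, or Column object all land on one shared record.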
class ResultMetaData(object):
@@ -209,7 +241,6 @@ class ResultMetaData(object):
        "matched_on_name",
        "_processors",
        "keys",
        "_orig_processors",
    )

    def __init__(self, parent, cursor_description):
@@ -217,12 +248,13 @@ class ResultMetaData(object):
        dialect = context.dialect
        self.case_sensitive = dialect.case_sensitive
        self.matched_on_name = False
        self._orig_processors = None

        if context.result_column_struct:
            result_columns, cols_are_ordered, textual_ordered = (
                context.result_column_struct
            )
            (
                result_columns,
                cols_are_ordered,
                textual_ordered,
            ) = context.result_column_struct
            num_ctx_cols = len(result_columns)
        else:
            result_columns = (
@@ -241,9 +273,9 @@ class ResultMetaData(object):
        )

        self._keymap = {}
        if not _baserowproxy_usecext:
        if not _baserow_usecext:
            # keymap indexes by integer index: this is only used
            # in the pure Python BaseRowProxy.__getitem__
            # in the pure Python BaseRow.__getitem__
            # implementation to avoid an expensive
            # isinstance(key, util.int_types) in the most common
            # case path
@@ -251,19 +283,29 @@ class ResultMetaData(object):
        len_raw = len(raw)

        self._keymap.update(
            [(elem[0], (elem[3], elem[4], elem[0])) for elem in raw]
            [
                (metadata_entry[MD_INDEX], metadata_entry)
                for metadata_entry in raw
            ]
            + [
                (elem[0] - len_raw, (elem[3], elem[4], elem[0]))
                for elem in raw
                (metadata_entry[MD_INDEX] - len_raw, metadata_entry)
                for metadata_entry in raw
            ]
        )
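The keymap update above registers every metadata entry twice: under its positive index and under the equivalent negative index, so that ``row[-1]`` resolves without any arithmetic at access time. A small self-contained sketch with toy three-element entries (the real entries are the six-element MD_* tuples):

```python
# toy metadata entries: (index, objects, lookup_key)
raw = [(0, (), "a"), (1, (), "b")]
len_raw = len(raw)

keymap = {}
keymap.update([(entry[0], entry) for entry in raw])
keymap.update([(entry[0] - len_raw, entry) for entry in raw])

# positive and negative indexes resolve to the same shared record
assert keymap[-1] is keymap[1]
assert keymap[-2] is keymap[0]
```

This trades a slightly larger dict for a branch-free integer lookup on the hot path of the pure-Python ``__getitem__``.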
        # processors in key order for certain per-row
        # views like __iter__ and slices
        self._processors = [elem[3] for elem in raw]
        self._processors = [
            metadata_entry[MD_PROCESSOR] for metadata_entry in raw
        ]

        # keymap by primary string...
        by_key = dict([(elem[2], (elem[3], elem[4], elem[0])) for elem in raw])
        by_key = dict(
            [
                (metadata_entry[MD_LOOKUP_KEY], metadata_entry)
                for metadata_entry in raw
            ]
        )

        # for compiled SQL constructs, copy additional lookup keys into
        # the key lookup map, such as Column objects, labels,
@@ -276,13 +318,13 @@ class ResultMetaData(object):
            # ambiguous column exception when accessed.
            if len(by_key) != num_ctx_cols:
                seen = set()
                for rec in raw:
                    key = rec[1]
                for metadata_entry in raw:
                    key = metadata_entry[MD_RENDERED_NAME]
                    if key in seen:
                        # this is an "ambiguous" element, replacing
                        # the full record in the map
                        key = key.lower() if not self.case_sensitive else key
                        by_key[key] = (None, key, None)
                        by_key[key] = (None, (), key)
                    seen.add(key)

            # copy secondary elements from compiled columns
@@ -290,10 +332,10 @@ class ResultMetaData(object):
            # element
            self._keymap.update(
                [
                    (obj_elem, by_key[elem[2]])
                    for elem in raw
                    if elem[4]
                    for obj_elem in elem[4]
                    (obj_elem, by_key[metadata_entry[MD_LOOKUP_KEY]])
                    for metadata_entry in raw
                    if metadata_entry[MD_OBJECTS]
                    for obj_elem in metadata_entry[MD_OBJECTS]
                ]
            )

@@ -304,9 +346,9 @@ class ResultMetaData(object):
            if not self.matched_on_name:
                self._keymap.update(
                    [
                        (elem[4][0], (elem[3], elem[4], elem[0]))
                        for elem in raw
                        if elem[4]
                        (metadata_entry[MD_OBJECTS][0], metadata_entry)
                        for metadata_entry in raw
                        if metadata_entry[MD_OBJECTS]
                    ]
                )
            else:
@@ -314,10 +356,10 @@ class ResultMetaData(object):
                # columns into self._keymap
                self._keymap.update(
                    [
                        (obj_elem, (elem[3], elem[4], elem[0]))
                        for elem in raw
                        if elem[4]
                        for obj_elem in elem[4]
                        (obj_elem, metadata_entry)
                        for metadata_entry in raw
                        if metadata_entry[MD_OBJECTS]
                        for obj_elem in metadata_entry[MD_OBJECTS]
                    ]
                )

@@ -328,7 +370,14 @@ class ResultMetaData(object):
        # update keymap with "translated" names (sqlite-only thing)
        if not num_ctx_cols and context._translate_colname:
            self._keymap.update(
                [(elem[5], self._keymap[elem[2]]) for elem in raw if elem[5]]
                [
                    (
                        metadata_entry[MD_UNTRANSLATED],
                        self._keymap[metadata_entry[MD_LOOKUP_KEY]],
                    )
                    for metadata_entry in raw
                    if metadata_entry[MD_UNTRANSLATED]
                ]
            )
    def _merge_cursor_description(
@@ -407,15 +456,19 @@ class ResultMetaData(object):
            return [
                (
                    idx,
                    key,
                    name.lower() if not case_sensitive else name,
                    rmap_entry[RM_OBJECTS],
                    rmap_entry[RM_NAME].lower()
                    if not case_sensitive
                    else rmap_entry[RM_NAME],
                    rmap_entry[RM_RENDERED_NAME],
                    context.get_result_processor(
                        type_, key, cursor_description[idx][1]
                        rmap_entry[RM_TYPE],
                        rmap_entry[RM_RENDERED_NAME],
                        cursor_description[idx][1],
                    ),
                    obj,
                    None,
                )
                for idx, (key, name, obj, type_) in enumerate(result_columns)
                for idx, rmap_entry in enumerate(result_columns)
            ]
        else:
            # name-based or text-positional cases, where we need
@@ -440,12 +493,12 @@ class ResultMetaData(object):
            return [
                (
                    idx,
                    obj,
                    colname,
                    colname,
                    context.get_result_processor(
                        mapped_type, colname, coltype
                    ),
                    obj,
                    untranslated,
                )
                for (
@@ -520,8 +573,8 @@ class ResultMetaData(object):
        ) in self._colnames_from_description(context, cursor_description):
            if idx < num_ctx_cols:
                ctx_rec = result_columns[idx]
                obj = ctx_rec[2]
                mapped_type = ctx_rec[3]
                obj = ctx_rec[RM_OBJECTS]
                mapped_type = ctx_rec[RM_TYPE]
                if obj[0] in seen:
                    raise exc.InvalidRequestError(
                        "Duplicate column expression requested "
@@ -537,7 +590,9 @@ class ResultMetaData(object):
    def _merge_cols_by_name(self, context, cursor_description, result_columns):
        dialect = context.dialect
        case_sensitive = dialect.case_sensitive
        result_map = self._create_result_map(result_columns, case_sensitive)
        match_map = self._create_description_match_map(
            result_columns, case_sensitive
        )

        self.matched_on_name = True
        for (
@@ -547,7 +602,7 @@ class ResultMetaData(object):
            coltype,
        ) in self._colnames_from_description(context, cursor_description):
            try:
                ctx_rec = result_map[colname]
                ctx_rec = match_map[colname]
            except KeyError:
                mapped_type = sqltypes.NULLTYPE
                obj = None
@@ -566,10 +621,20 @@ class ResultMetaData(object):
            yield idx, colname, sqltypes.NULLTYPE, coltype, None, untranslated

    @classmethod
    def _create_result_map(cls, result_columns, case_sensitive=True):
    def _create_description_match_map(
        cls, result_columns, case_sensitive=True
    ):
        """when matching cursor.description to a set of names that are present
        in a Compiled object, as is the case with TextualSelect, get all the
        names we expect might match those in cursor.description.
        """

        d = {}
        for elem in result_columns:
            key, rec = elem[0], elem[1:]
            key, rec = (
                elem[RM_RENDERED_NAME],
                (elem[RM_NAME], elem[RM_OBJECTS], elem[RM_TYPE]),
            )
            if not case_sensitive:
                key = key.lower()
            if key in d:
@@ -581,17 +646,16 @@ class ResultMetaData(object):
                d[key] = e_name, e_obj + rec[1], e_type
            else:
                d[key] = rec

        return d
    def _key_fallback(self, key, raiseerr=True):
        map_ = self._keymap
        result = None
        # lowercase col support will be deprecated, at the
        # create_engine() / dialect level
        if isinstance(key, util.string_types):
            result = map_.get(key if self.case_sensitive else key.lower())
        # fallback for targeting a ColumnElement to a textual expression
        # this is a rare use case which only occurs when matching text()
        # or column('name') constructs to ColumnElements, or after a
        # pickle/unpickle roundtrip
        elif isinstance(key, expression.ColumnElement):
            if (
                key._label
@@ -610,12 +674,16 @@ class ResultMetaData(object):
                result = map_[
                    key.name if self.case_sensitive else key.name.lower()
                ]

            # search extra hard to make sure this
            # isn't a column/label name overlap.
            # this check isn't currently available if the row
            # was unpickled.
            if result is not None and result[1] is not None:
                for obj in result[1]:
            if result is not None and result[MD_OBJECTS] not in (
                None,
                _UNPICKLED,
            ):
                for obj in result[MD_OBJECTS]:
                    if key._compare_name_for_result(obj):
                        break
                    else:
@@ -639,13 +707,14 @@ class ResultMetaData(object):
        return self._key_fallback(key, False) is not None

    def _getter(self, key, raiseerr=True):
        if key in self._keymap:
            processor, obj, index = self._keymap[key]
        else:
            ret = self._key_fallback(key, raiseerr)
            if ret is None:
        try:
            rec = self._keymap[key]
        except KeyError:
            rec = self._key_fallback(key, raiseerr)
            if rec is None:
                return None
            processor, obj, index = ret

        index, obj = rec[0:2]

        if index is None:
            raise exc.InvalidRequestError(
@@ -653,29 +722,66 @@ class ResultMetaData(object):
                "result set column descriptions" % obj
            )

        return operator.itemgetter(index)
        return operator.methodcaller("_get_by_key_impl", index)

    def _tuple_getter(self, keys, raiseerr=True):
        """Given a list of keys, return a callable that will deliver a tuple.

        This is strictly used by the ORM and the keys are Column objects.
        However, this might be some nice-ish feature if we could find a very
        clean way of presenting it.

        note that in the new world of "row._mapping", this is a mapping-getter.
        maybe the name should indicate that somehow.


        """
        indexes = []
        for key in keys:
            try:
                rec = self._keymap[key]
            except KeyError:
                rec = self._key_fallback(key, raiseerr)
                if rec is None:
                    return None

            index, obj = rec[0:2]

            if index is None:
                raise exc.InvalidRequestError(
                    "Ambiguous column name '%s' in "
                    "result set column descriptions" % obj
                )
            indexes.append(index)

        if _baserow_usecext:
            return _tuplegetter(*indexes)
        else:
            return self._pure_py_tuplegetter(*indexes)

    def _pure_py_tuplegetter(self, *indexes):
        getters = [
            operator.methodcaller("_get_by_key_impl", index)
            for index in indexes
        ]
        return lambda rec: tuple(getter(rec) for getter in getters)
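Note the switch in ``_getter`` and ``_pure_py_tuplegetter`` from ``operator.itemgetter`` to ``operator.methodcaller``: the lookup is now routed through the row's own ``_get_by_key_impl()`` accessor rather than indexing the row directly. A minimal sketch of the difference, using a hypothetical ``Rec`` class in place of a real row:

```python
import operator


class Rec:
    """Hypothetical record exposing _get_by_key_impl, like the pure-Python row."""

    def _get_by_key_impl(self, index):
        return ("x", "y", "z")[index]


# methodcaller routes the lookup through the object's own accessor,
# where operator.itemgetter(1) would require the object to support []
single = operator.methodcaller("_get_by_key_impl", 1)
assert single(Rec()) == "y"

# composing several single-index getters into one tuple-returning
# callable, in the spirit of _pure_py_tuplegetter above
getters = [operator.methodcaller("_get_by_key_impl", i) for i in (0, 2)]


def tg(rec):
    return tuple(g(rec) for g in getters)


assert tg(Rec()) == ("x", "z")
```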
    def __getstate__(self):
        return {
            "_pickled_keymap": dict(
                (key, index)
                for key, (processor, obj, index) in self._keymap.items()
            "_keymap": {
                key: (rec[MD_INDEX], _UNPICKLED, key)
                for key, rec in self._keymap.items()
                if isinstance(key, util.string_types + util.int_types)
            ),
            },
            "keys": self.keys,
            "case_sensitive": self.case_sensitive,
            "matched_on_name": self.matched_on_name,
        }

    def __setstate__(self, state):
        # the row has been processed at pickling time so we don't need any
        # processor anymore
        self._processors = [None for _ in range(len(state["keys"]))]
        self._keymap = keymap = {}
        for key, index in state["_pickled_keymap"].items():
            # not preserving "obj" here, unfortunately our
            # proxy comparison fails with the unpickle
            keymap[key] = (None, None, index)
        self._keymap = state["_keymap"]

        self.keys = state["keys"]
        self.case_sensitive = state["case_sensitive"]
        self.matched_on_name = state["matched_on_name"]
@@ -702,7 +808,7 @@ class ResultProxy(object):

    """

    _process_row = RowProxy
    _process_row = Row
    out_parameters = None
    _autoclose_connection = False
    _metadata = None
@@ -727,6 +833,14 @@ class ResultProxy(object):
        else:
            return getter(key, raiseerr)

    def _tuple_getter(self, key, raiseerr=True):
        try:
            getter = self._metadata._tuple_getter
        except AttributeError:
            return self._non_result(None)
        else:
            return getter(key, raiseerr)

    def _has_key(self, key):
        try:
            has_key = self._metadata._has_key
@@ -745,6 +859,9 @@ class ResultProxy(object):
            if self.context.compiled._cached_metadata:
                self._metadata = self.context.compiled._cached_metadata
            else:
                # TODO: what we hope to do here is have "Legacy" be
                # the default in 1.4 but a flag (somewhere?) will have it
                # use non-legacy. ORM should be able to use non-legacy
                self._metadata = (
                    self.context.compiled._cached_metadata
                ) = ResultMetaData(self, cursor_description)
@@ -1054,7 +1171,7 @@ class ResultProxy(object):
        """Return the values of default columns that were fetched using
        the :meth:`.ValuesBase.return_defaults` feature.

        The value is an instance of :class:`.RowProxy`, or ``None``
        The value is an instance of :class:`.Row`, or ``None``
        if :meth:`.ValuesBase.return_defaults` was not used or if the
        backend does not support RETURNING.

@@ -1178,16 +1295,17 @@ class ResultProxy(object):
        metadata = self._metadata
        keymap = metadata._keymap
        processors = metadata._processors

        if self._echo:
            log = self.context.engine.logger.debug
            l = []
            for row in rows:
                log("Row %r", sql_util._repr_row(row))
                l.append(process_row(metadata, row, processors, keymap))
                l.append(process_row(metadata, processors, keymap, row))
            return l
        else:
            return [
                process_row(metadata, row, processors, keymap) for row in rows
                process_row(metadata, processors, keymap, row) for row in rows
            ]
def fetchall(self):
|
||||
@@ -1456,76 +1574,16 @@ class FullyBufferedResultProxy(ResultProxy):
|
||||
return ret
|
||||
|
||||
|
||||
-class BufferedColumnRow(RowProxy):
-    def __init__(self, parent, row, processors, keymap):
-        # preprocess row
-        row = list(row)
-        # this is a tad faster than using enumerate
-        index = 0
-        for processor in parent._orig_processors:
-            if processor is not None:
-                row[index] = processor(row[index])
-            index += 1
-        row = tuple(row)
-        super(BufferedColumnRow, self).__init__(
-            parent, row, processors, keymap
-        )
+class BufferedColumnRow(Row):
+    """Row is now BufferedColumn in all cases"""


 class BufferedColumnResultProxy(ResultProxy):
     """A ResultProxy with column buffering behavior.

     ``ResultProxy`` that loads all columns into memory each time
     fetchone() is called.  If fetchmany() or fetchall() are called,
     the full grid of results is fetched.  This is to operate with
     databases where result rows contain "live" results that fall out
     of scope unless explicitly fetched.

-    .. versionchanged:: 1.2  This :class:`.ResultProxy` is not used by
-       any SQLAlchemy-included dialects.
+    .. versionchanged:: 1.4  This is now the default behavior of the Row
+       and this class does not change behavior in any way.

     """

-    _process_row = BufferedColumnRow
-
-    def _init_metadata(self):
-        super(BufferedColumnResultProxy, self)._init_metadata()
-
-        metadata = self._metadata
-
-        # don't double-replace the processors, in the case
-        # of a cached ResultMetaData
-        if metadata._orig_processors is None:
-            # orig_processors will be used to preprocess each row when
-            # they are constructed.
-            metadata._orig_processors = metadata._processors
-            # replace the all type processors by None processors.
-            metadata._processors = [None for _ in range(len(metadata.keys))]
-            keymap = {}
-            for k, (func, obj, index) in metadata._keymap.items():
-                keymap[k] = (None, obj, index)
-            metadata._keymap = keymap
-
-    def fetchall(self):
-        # can't call cursor.fetchall(), since rows must be
-        # fully processed before requesting more from the DBAPI.
-        l = []
-        while True:
-            row = self.fetchone()
-            if row is None:
-                break
-            l.append(row)
-        return l
-
-    def fetchmany(self, size=None):
-        # can't call cursor.fetchmany(), since rows must be
-        # fully processed before requesting more from the DBAPI.
-        if size is None:
-            return self.fetchall()
-        l = []
-        for i in range(size):
-            row = self.fetchone()
-            if row is None:
-                break
-            l.append(row)
-        return l
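The removed `BufferedColumnRow.__init__` above shows the pattern that this commit generalizes: every row now has its type-level result processors applied once, at construction, rather than lazily on column access. A standalone sketch of the idea in plain Python (not SQLAlchemy's internal API; names are illustrative):

```python
def process_row(raw_row, processors):
    """Apply each column's processor once, returning a plain tuple.

    A processor of None means the raw DBAPI value passes through
    unchanged, mirroring the None entries in a processors list.
    """
    return tuple(
        value if proc is None else proc(value)
        for proc, value in zip(processors, raw_row)
    )


# an int passed through unchanged, a string processor applied eagerly
row = process_row((7, "abc"), [None, str.upper])
```

Once the tuple is built, column access is a plain indexed read with no per-access function call, which is the performance point of the change.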
@@ -229,7 +229,7 @@ class ResourceClosedError(InvalidRequestError):


 class NoSuchColumnError(KeyError, InvalidRequestError):
-    """A nonexistent column is requested from a ``RowProxy``."""
+    """A nonexistent column is requested from a ``Row``."""


 class NoReferenceError(InvalidRequestError):
@@ -358,11 +358,6 @@ def _instance_processor(
     # call overhead.  _instance() is the most
     # performance-critical section in the whole ORM.

-    pk_cols = mapper.primary_key
-
-    if adapter:
-        pk_cols = [adapter.columns[c] for c in pk_cols]
-
     identity_class = mapper._identity_class

     populators = collections.defaultdict(list)
@@ -488,6 +483,12 @@ def _instance_processor(
     else:
         refresh_identity_key = None

+    pk_cols = mapper.primary_key
+
+    if adapter:
+        pk_cols = [adapter.columns[c] for c in pk_cols]
+    tuple_getter = result._tuple_getter(pk_cols, True)
+
     if mapper.allow_partial_pks:
         is_not_primary_key = _none_set.issuperset
     else:
@@ -507,11 +508,7 @@ def _instance_processor(
         else:
             # look at the row, see if that identity is in the
             # session, or we have to create a new one
-            identitykey = (
-                identity_class,
-                tuple([row[column] for column in pk_cols]),
-                identity_token,
-            )
+            identitykey = (identity_class, tuple_getter(row), identity_token)

             instance = session_identity_map.get(identitykey)

@@ -853,8 +850,10 @@ def _decorate_polymorphic_switch(

     polymorphic_instances = util.PopulateDict(configure_subclass_mapper)

+    getter = result._getter(polymorphic_on)
+
     def polymorphic_instance(row):
-        discriminator = row[polymorphic_on]
+        discriminator = getter(row)
         if discriminator is not None:
             _instance = polymorphic_instances[discriminator]
             if _instance:
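The `result._tuple_getter(pk_cols)` call above replaces a per-row list comprehension with a getter resolved once per query and then reused for every row. A rough stand-in for what such a getter does (the real method is internal to SQLAlchemy; the names and the key-to-index mapping here are illustrative):

```python
from operator import itemgetter


def make_tuple_getter(key_to_index, cols):
    """Resolve column positions once; return a fast per-row extractor.

    itemgetter with multiple indexes already returns a tuple; the
    single-column case is wrapped so the result is always a tuple.
    """
    indexes = [key_to_index[c] for c in cols]
    if len(indexes) == 1:
        idx = indexes[0]
        return lambda row: (row[idx],)
    return itemgetter(*indexes)


# resolve ("id", "version") -> positions 0 and 2, once
getter = make_tuple_getter({"id": 0, "data": 1, "version": 2}, ["id", "version"])
pk = getter((5, "x", 9))
```

Hoisting the index resolution out of the per-row path matters here because, as the surrounding comment notes, `_instance()` is the most performance-critical section in the ORM.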
@@ -1383,20 +1383,20 @@ class SubqueryLoader(PostLoader):

         if self.uselist:
             self._create_collection_loader(
-                context, collections, local_cols, populators
+                context, result, collections, local_cols, populators
             )
         else:
             self._create_scalar_loader(
-                context, collections, local_cols, populators
+                context, result, collections, local_cols, populators
             )

     def _create_collection_loader(
-        self, context, collections, local_cols, populators
+        self, context, result, collections, local_cols, populators
     ):
+        tuple_getter = result._tuple_getter(local_cols)
+
         def load_collection_from_subq(state, dict_, row):
-            collection = collections.get(
-                tuple([row[col] for col in local_cols]), ()
-            )
+            collection = collections.get(tuple_getter(row), ())
             state.get_impl(self.key).set_committed_value(
                 state, dict_, collection
             )
@@ -1414,12 +1414,12 @@ class SubqueryLoader(PostLoader):
         populators["eager"].append((self.key, collections.loader))

     def _create_scalar_loader(
-        self, context, collections, local_cols, populators
+        self, context, result, collections, local_cols, populators
     ):
+        tuple_getter = result._tuple_getter(local_cols)
+
         def load_scalar_from_subq(state, dict_, row):
-            collection = collections.get(
-                tuple([row[col] for col in local_cols]), (None,)
-            )
+            collection = collections.get(tuple_getter(row), (None,))
             if len(collection) > 1:
                 util.warn(
                     "Multiple rows returned with "
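The subquery loader's `collections.get(tuple_getter(row), ...)` lookup depends on child rows having been grouped under a parent-key tuple beforehand. A minimal sketch of that grouping-and-lookup pattern, detached from the ORM (the data and names are invented for illustration):

```python
from collections import defaultdict

# child rows keyed by a parent primary-key tuple, as a subquery load
# would produce them
child_rows = [((1,), "a"), ((1,), "b"), ((2,), "c")]

collections = defaultdict(list)
for key, value in child_rows:
    collections[key].append(value)

# per parent row: look up its children, defaulting to empty when a
# parent has no related rows (the () default in the loader above)
children_of_1 = collections.get((1,), ())
children_of_3 = collections.get((3,), ())
```

The tuple key is what makes the precompiled `tuple_getter` worthwhile: it produces exactly the hashable key used for this dictionary lookup, once per row.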
@@ -297,7 +297,7 @@ def identity_key(*args, **kwargs):
     * ``identity_key(class, row=row, identity_token=token)``

       This form is similar to the class/tuple form, except is passed a
-      database result row as a :class:`.RowProxy` object.
+      database result row as a :class:`.Row` object.

       E.g.::

@@ -307,7 +307,7 @@ first()
           (<class '__main__.MyClass'>, (1, 2), None)

       :param class: mapped class (must be a positional argument)
-      :param row: :class:`.RowProxy` row returned by a :class:`.ResultProxy`
+      :param row: :class:`.Row` row returned by a :class:`.ResultProxy`
        (must be given as a keyword arg)
       :param identity_token: optional identity token
@@ -251,6 +251,12 @@ COMPOUND_KEYWORDS = {
 }


+RM_RENDERED_NAME = 0
+RM_NAME = 1
+RM_OBJECTS = 2
+RM_TYPE = 3
+
+
 class Compiled(object):

     """Represent a compiled SQL or DDL expression.
@@ -710,7 +716,9 @@ class SQLCompiler(Compiled):
     @util.dependencies("sqlalchemy.engine.result")
     def _create_result_map(self, result):
         """utility method used for unit tests only."""
-        return result.ResultMetaData._create_result_map(self._result_columns)
+        return result.ResultMetaData._create_description_match_map(
+            self._result_columns
+        )

     def default_from(self):
         """Called when a SELECT statement has no froms, and no FROM clause is
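The new module-level `RM_*` constants name the positions inside the positional records that make up the compiler's result-column metadata, so downstream code can index them readably. A minimal illustration of the idiom (the record's contents here are made up, not an actual compiler record):

```python
# index constants into a positional result-column record tuple
RM_RENDERED_NAME = 0  # name as rendered in the SQL statement
RM_NAME = 1           # logical column name
RM_OBJECTS = 2        # objects (columns, labels) matched to this position
RM_TYPE = 3           # column type

# hypothetical record, shaped (rendered name, name, objects, type)
rec = ("otherid_1", "otherid", ("otherid",), "INTEGER")

rendered = rec[RM_RENDERED_NAME]
logical = rec[RM_NAME]
```

Plain tuples plus index constants keep per-record overhead lower than named tuples or objects, a common choice in hot result-processing paths.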
@@ -84,7 +84,18 @@ def profile_memory(
             if until_maxtimes >= maxtimes // 5:
                 break
             for x in range(5):
-                func(*func_args)
+                try:
+                    func(*func_args)
+                except Exception as err:
+                    queue.put(
+                        (
+                            "result",
+                            False,
+                            "Test raised an exception: %r" % err,
+                        )
+                    )
+
+                    raise
                 gc_collect()
                 samples.append(
                     get_num_objects()
@@ -910,6 +921,7 @@ class MemUsageWBackendTest(EnsureZeroed):
         metadata.drop_all()
         assert_no_mappers()

+    @testing.expect_deprecated
     @testing.provide_metadata
     def test_key_fallback_result(self):
         e = self.engine
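The `try`/`except` added to `profile_memory` above reports a worker failure back through a queue before re-raising, so the parent process sees the error message even though the exception still propagates in the worker. The pattern in isolation (a sketch; `run_guarded` and the failing callable are invented for the example):

```python
import queue


def run_guarded(func, q):
    """Report a failure on the queue, then re-raise it."""
    try:
        func()
    except Exception as err:
        q.put(("result", False, "Test raised an exception: %r" % err))
        raise


q = queue.Queue()
try:
    run_guarded(lambda: 1 / 0, q)
except ZeroDivisionError:
    pass  # the exception still propagates after being reported
```

Without the report step, a crash inside the profiled function would surface only as a dead subprocess with no diagnostic.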
@@ -8,7 +8,7 @@ from sqlalchemy import String
 from sqlalchemy import Table
 from sqlalchemy import testing
 from sqlalchemy import Unicode
-from sqlalchemy.engine.result import RowProxy
+from sqlalchemy.engine.result import Row
 from sqlalchemy.testing import AssertsExecutionResults
 from sqlalchemy.testing import eq_
 from sqlalchemy.testing import fixtures
@@ -149,11 +149,11 @@ class ExecutionTest(fixtures.TestBase):
         go()


-class RowProxyTest(fixtures.TestBase):
+class RowTest(fixtures.TestBase):
     __requires__ = ("cpython",)
     __backend__ = True

-    def _rowproxy_fixture(self, keys, processors, row):
+    def _rowproxy_fixture(self, keys, processors, row, row_cls):
         class MockMeta(object):
             def __init__(self):
                 pass
@@ -161,13 +161,11 @@ class RowProxyTest(fixtures.TestBase):
         metadata = MockMeta()

         keymap = {}
-        for index, (keyobjs, processor, values) in enumerate(
-            list(zip(keys, processors, row))
-        ):
+        for index, (keyobjs, values) in enumerate(list(zip(keys, row))):
             for key in keyobjs:
-                keymap[key] = (processor, key, index)
-            keymap[index] = (processor, key, index)
-        return RowProxy(metadata, row, processors, keymap)
+                keymap[key] = (index, key)
+            keymap[index] = (index, key)
+        return row_cls(metadata, processors, keymap, row)

     def _test_getitem_value_refcounts(self, seq_factory):
         col1, col2 = object(), object()
@@ -180,6 +178,7 @@ class RowProxyTest(fixtures.TestBase):
             [(col1, "a"), (col2, "b")],
             [proc1, None],
             seq_factory([value1, value2]),
+            Row,
         )

         v1_refcount = sys.getrefcount(value1)
+618 -433 (file diff suppressed because it is too large)
@@ -4590,7 +4590,9 @@ class ResultMapTest(fixtures.TestBase):

         comp = MyCompiler(default.DefaultDialect(), stmt1)
         eq_(
-            ResultMetaData._create_result_map(contexts[stmt2.element][0]),
+            ResultMetaData._create_description_match_map(
+                contexts[stmt2.element][0]
+            ),
             {
                 "otherid": (
                     "otherid",
+29 -18
@@ -1,4 +1,5 @@
 from contextlib import contextmanager
+import csv
 import operator

 from sqlalchemy import CHAR
@@ -24,6 +25,7 @@ from sqlalchemy import util
 from sqlalchemy import VARCHAR
 from sqlalchemy.engine import default
 from sqlalchemy.engine import result as _result
+from sqlalchemy.engine import Row
 from sqlalchemy.testing import assert_raises
 from sqlalchemy.testing import assert_raises_message
 from sqlalchemy.testing import assertions
@@ -32,6 +34,7 @@ from sqlalchemy.testing import eq_
 from sqlalchemy.testing import fixtures
 from sqlalchemy.testing import in_
 from sqlalchemy.testing import is_
+from sqlalchemy.testing import is_true
 from sqlalchemy.testing import le_
 from sqlalchemy.testing import ne_
 from sqlalchemy.testing import not_in_
@@ -39,6 +42,7 @@ from sqlalchemy.testing.mock import Mock
 from sqlalchemy.testing.mock import patch
 from sqlalchemy.testing.schema import Column
 from sqlalchemy.testing.schema import Table
+from sqlalchemy.util import collections_abc


 class ResultProxyTest(fixtures.TablesTest):
@@ -1043,10 +1047,13 @@ class ResultProxyTest(fixtures.TablesTest):
         eq_(r["_row"], "Hidden row")

     def test_nontuple_row(self):
-        """ensure the C version of BaseRowProxy handles
-        duck-type-dependent rows."""
-
-        from sqlalchemy.engine import RowProxy
+        """ensure the C version of BaseRow handles
+        duck-type-dependent rows.
+
+        As of 1.4 they are converted internally to tuples in any case.
+
+        """

         class MyList(object):
             def __init__(self, data):
@@ -1058,11 +1065,11 @@ class ResultProxyTest(fixtures.TablesTest):
             def __getitem__(self, i):
                 return list.__getitem__(self.internal_list, i)

-        proxy = RowProxy(
+        proxy = Row(
             object(),
-            MyList(["value"]),
             [None],
-            {"key": (None, None, 0), 0: (None, None, 0)},
+            {"key": (0, None, "key"), 0: (0, None, "key")},
+            MyList(["value"]),
         )
         eq_(list(proxy), ["value"])
         eq_(proxy[0], "value")
@@ -1108,20 +1115,25 @@ class ResultProxyTest(fixtures.TablesTest):
         engine.execute(t.delete())
         eq_(len(mock_rowcount.__get__.mock_calls), 2)

-    def test_rowproxy_is_sequence(self):
-        from sqlalchemy.util import collections_abc
-        from sqlalchemy.engine import RowProxy
+    def test_row_is_sequence(self):

-        row = RowProxy(
-            object(),
-            ["value"],
-            [None],
-            {"key": (None, None, 0), 0: (None, None, 0)},
+        row = Row(
+            object(), [None], {"key": (None, 0), 0: (None, 0)}, ["value"]
         )
-        assert isinstance(row, collections_abc.Sequence)
+        is_true(isinstance(row, collections_abc.Sequence))
+
+    def test_row_is_hashable(self):
+
+        row = Row(
+            object(),
+            [None, None, None],
+            {"key": (None, 0), 0: (None, 0)},
+            (1, "value", "foo"),
+        )
+        eq_(hash(row), hash((1, "value", "foo")))

     @testing.provide_metadata
-    def test_rowproxy_getitem_indexes_compiled(self):
+    def test_row_getitem_indexes_compiled(self):
         values = Table(
             "rp",
             self.metadata,
@@ -1141,7 +1153,7 @@ class ResultProxyTest(fixtures.TablesTest):
         eq_(row[1:0:-1], ("Uno",))

     @testing.only_on("sqlite")
-    def test_rowproxy_getitem_indexes_raw(self):
+    def test_row_getitem_indexes_raw(self):
         row = testing.db.execute("select 'One' as key, 'Uno' as value").first()
         eq_(row["key"], "One")
         eq_(row["value"], "Uno")
@@ -1153,7 +1165,6 @@ class ResultProxyTest(fixtures.TablesTest):

     @testing.requires.cextensions
     def test_row_c_sequence_check(self):
-        import csv

         metadata = MetaData()
         metadata.bind = "sqlite://"