This repository contains the tntcxx Tarantool C++ connector code. tntcxx is an open-source Tarantool C++ connector (compliant with C++17) designed with high efficiency in mind.
CMake is the official build system for tntcxx.
tntcxx comes with a CMake build script (CMakeLists.txt) that can be used on a wide range of platforms ("C" stands for cross-platform.). If you don't have CMake installed already, you can download it for free from https://www.cmake.org/. CMake works by generating native makefiles or build projects that can be used in the compiler environment of your choice. For API/ABI compatibility reasons, we strongly recommend building tntcxx in a subdirectory of your project or as an embedded dependency.
- Make tntcxx's source code available to the main build. This can be done a few different ways:
- Download the tntcxx source code manually and place it at a known location. This is the least flexible approach and can make it more difficult to use with continuous integration systems, etc.
- Embed the tntcxx source code as a direct copy in the main project's source tree. This is often the simplest approach, but is also the hardest to keep up to date. Some organizations may not permit this method.
- Add tntcxx as a git submodule or equivalent. This may not always be possible or appropriate. Git submodules, for example, have their own set of advantages and drawbacks.
- Use the CMake FetchContent commands to download tntcxx as part of the build's configure step. This approach doesn't have the limitations of the other methods.
The last of the above methods is implemented with a small piece of CMake code that downloads and pulls the tntcxx code into the main build. Just add the following snippet to your CMakeLists.txt:
include(FetchContent)
FetchContent_Declare(
  tntcxx
  GIT_REPOSITORY https://github.com/tarantool/tntcxx.git
)
FetchContent_MakeAvailable(tntcxx)
After obtaining the tntcxx sources using any of the other methods, you can use the following CMake command to incorporate tntcxx into your CMake project:
add_subdirectory(${TNTCXX_SOURCE_DIR})
- Now simply link against the tntcxx::tntcxx target as needed:
add_executable(example example.cpp)
target_link_libraries(example tntcxx::tntcxx)
Use the -DTNTCXX_BUILD_TESTING=ON option to run the tntcxx tests. This option is enabled by default if the tntcxx project is determined to be the top level project. Note that BUILD_TESTING must also be on (the default).
For example, to run the tntcxx tests, you could use this script:
cd path/to/tntcxx
mkdir build
cd build
cmake -DTNTCXX_BUILD_TESTING=ON ..
make -j
ctest
The C++ connector consists of three main parts: an IO zero-copy buffer, a msgpack encoder/decoder, and the client itself, which handles requests.
The buffer is parameterized by an allocator, which means that users can choose which allocator will be used to provide memory for the buffer's blocks. Data is organized into a linked list of fixed-size blocks; the block size is specified as a template parameter of the buffer.
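For illustration, a buffer with a non-default block size could be declared as follows (a minimal sketch; the header path is assumed to mirror the repository layout, and the allocator is left at its library default):
#include "<path-to-cloned-repo>/src/Buffer/Buffer.hpp"
/* 4 KB blocks instead of the 16 KB used in the examples below;
 * the allocator template parameter keeps its default here. */
using SmallBuf = tnt::Buffer<4 * 1024>;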
TODO: see src/Client/Connection.hpp and src/Client/Connector.hpp
The connector can be embedded in any C++ application by including the main header:
#include "<path-to-cloned-repo>/src/Client/Connector.hpp"
To create a client, one should specify the buffer's and network provider's implementations as template parameters. The connector's main class has the following signature:
template<class BUFFER, class NetProvider = EpollNetProvider<BUFFER>>
class Connector;
If one doesn't want to bother implementing their own buffer or network provider, the default ones can be used: tnt::Buffer<16 * 1024> and EpollNetProvider<tnt::Buffer<16 * 1024>>. So the default instantiation would look like:
using Buf_t = tnt::Buffer<16 * 1024>;
using Net_t = EpollNetProvider<Buf_t>;
Connector<Buf_t, Net_t> client;
The client itself is not enough to work with Tarantool instances, so let's also create connection objects. A connection takes the buffer and network provider as template parameters as well (note that they must be the same as the client's):
Connection<Buf_t, Net_t> conn(client);
Now assume a Tarantool instance is listening on port 3301 on localhost. To connect to the server, we should invoke the Connector::connect() method of the client object and pass three arguments: the connection instance, the address, and the port:
int rc = client.connect(conn, address, port);
The connector implementation is exception-free, so we rely on return codes: in case of failure, connect() will return rc < 0.
To get the error message corresponding to the last error that happened during communication with the server, we can invoke the Connection::getError() method:
if (rc != 0) {
    assert(conn.status.is_failed);
    std::cerr << conn.getError() << std::endl;
}
To reset a connection after errors (i.e. to clean up the error message and connection status), one can use Connection::reset().
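Putting these pieces together, a connection attempt with error handling might look as follows (a sketch only; address and port are placeholders, and the retry policy is up to the application):
int rc = client.connect(conn, address, port);
if (rc != 0) {
    /* Report the failure, then clear the error message and status. */
    std::cerr << conn.getError() << std::endl;
    conn.reset();
}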
To execute the simplest request (i.e. ping), one can invoke the corresponding method of the connection object:
rid_t ping = conn.ping();
Each request method returns a request id, which is a sort of future. It can be used to get the result of the request execution once it is ready (i.e. the response). Requests are queued in the output buffer of the connection until Connector::wait() is called. That said, to send requests to the server side, we should invoke client.wait():
client.wait(conn, ping, WAIT_TIMEOUT);
Basically, wait() takes the connection to poll (both IN and OUT), the request id and, optionally, a timeout (in milliseconds). Once the response for the specified request is ready, wait() terminates. It also returns a negative code in case of system-related failures (e.g. a broken or timed-out connection). If wait() returns 0, then the response has been received and is ready to be parsed.
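For example, the return code of wait() can be checked like this (a sketch; WAIT_TIMEOUT is the same application-defined timeout as above):
if (client.wait(conn, ping, WAIT_TIMEOUT) != 0) {
    /* System-related failure: broken or timed-out connection. */
    std::cerr << conn.getError() << std::endl;
}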
To get the response when it is ready, we can use Connection::getResponse(). It takes a request id and returns an optional object containing the response (std::nullopt in case the response is not ready yet). Note that it can be called only once per future: getResponse() erases the request id from its internal map once the response is returned to the user.
std::optional<Response<Buf_t>> response = conn.getResponse(ping);
A response consists of a header and a body (response.header and response.body). Depending on the success of the request execution on the server side, the body may contain either runtime error(s) (accessible via response.body.error_stack) or data, i.e. tuples (response.body.data). In turn, data is a vector of tuples. However, the tuples are not decoded and come in the form of pointers to the start and end of their msgpacks. See the section below to understand how to decode tuples.
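To make the flow concrete, a received response might be inspected along these lines (a schematic sketch; the exact member types are not spelled out here):
std::optional<Response<Buf_t>> response = conn.getResponse(ping);
if (response.has_value()) {
    /* response->header carries the request metadata;
     * on failure, the server error(s) are reachable via response->body.error_stack,
     * on success, the still-encoded tuples are in response->body.data. */
}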
Now let's consider slightly more sophisticated requests. Assume we have a space with id = 512 and the following format on the server:
CREATE TABLE t(id INT PRIMARY KEY, name TEXT, coef DOUBLE);
Preparing an analogue of the t:replace{1, "111", 1.01} request can be done this way:
std::tuple data = std::make_tuple(1 /*id*/, "111" /*name*/, 1.01 /*coef*/);
rid_t my_replace = conn.space[512].replace(data);
As a good alternative, we could use a structure instead of std::tuple, but then we have to describe (once) how it must be encoded:
struct UserTuple {
    uint64_t id;
    std::string name;
    double coef;
    static constexpr auto mpp = std::make_tuple(
        &UserTuple::id, &UserTuple::name, &UserTuple::coef);
};
...
UserTuple tuple{.id = 1, .name = "aa", .coef = 1.01};
rid_t my_replace = conn.space[512].replace(tuple);
To execute the select query t.index[1]:select({1}, {limit = 1}):
auto i = conn.space[512].index[1];
rid_t my_select = i.select(std::make_tuple(1), 1 /*limit*/, 0 /*offset*/, IteratorType::EQ);
Responses from the server contain raw data (i.e. tuples encoded in msgpack). To decode the data, we have to provide user storage that implicitly describes the tuple format. For example, we know that the space (and each of its tuples) has three fields: unsigned, string and number. Then std::tuple<uint64_t, std::string, double> can be used as a complete storage for decoding tuples of such a space. Since select returns a dynamic array of tuples, the storage must also be a dynamic array (for example, a vector):
rid_t my_select = i.select(....);
// wait for response...
assert(conn.futureIsReady(my_select));
auto response = conn.getResponse(my_select);
std::vector<std::tuple<uint64_t, std::string, double>> results;
response->body.data.decode(results);
// use results...
std::tuple is good since it has a clearly readable format, but plain structures are much more convenient to use. To decode into structures, we have to declare their format for the decoder:
struct UserTuple {
    uint64_t id;
    std::string name;
    double coef;
    static constexpr auto mpp = std::make_tuple(
        &UserTuple::id, &UserTuple::name, &UserTuple::coef);
};
// Perform select and wait for result...
auto response = conn.getResponse(my_select);
std::vector<UserTuple> results;
response->body.data.decode(results);
// use results...
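Once decoded, the structures can be used like any other C++ objects, for example (a sketch):
for (const UserTuple &t : results)
    std::cout << t.id << " " << t.name << " " << t.coef << std::endl;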
TODO