35up can send requests to predefined endpoints in reaction to events as they happen in the platform. With this strategy, cronjobs and other “pull” approaches, which are often expensive and inevitably delayed, can be replaced by “push” strategies that process the necessary data in near real-time and are far less resource-intensive.
For example, to detect order status changes, a traditional “pull” strategy would regularly query the GET /v1/orders/:order-id endpoint at a fixed interval of minutes or hours. As orders accumulate, each run of this cronjob could quickly result in hundreds of API calls, potentially running for a long time, many times per day. This puts considerable strain on both client and server systems, even when it is clear that many of the referenced orders will not change for days.
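To make the cost concrete, a naive “pull” implementation could look like the sketch below (TypeScript; the base URL, error handling, and the way order ids are obtained are placeholders, not part of the 35up API):

```typescript
// A naive "pull" cronjob: re-fetch every known order on every run,
// whether or not anything changed since the last poll.
const BASE_URL = 'https://api.example.com'; // placeholder, not a real 35up URL

async function pollOrders(orderIds: string[]): Promise<void> {
  for (const orderId of orderIds) {
    // One API call per order, per run; with thousands of orders this
    // quickly adds up, even though most statuses are unchanged.
    const res = await fetch(`${BASE_URL}/v1/orders/${orderId}`);
    if (!res.ok) continue; // real code would log and retry
    const order = await res.json();
    // ...compare `order.status` with the locally stored status and react
  }
}
```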
At the same time, the “pull” strategy is not very reliable: updates can be delayed by as much as the interval chosen for the cronjob. If it runs once per day, a data update may only arrive up to 24 hours later. Moreover, when not implemented carefully, cronjob batch processing can easily run into common errors, especially for long-running jobs, such as the process being terminated abruptly before the queue is fully processed, or race conditions leading to deadlocks and duplicated operations.
Using a “push” strategy, both systems can take advantage of a fully event-driven approach and handle requests as they happen, only when they happen, as soon as they happen. The system requirements in this case are very different from the “pull” strategy: the client (the seller) must keep a service permanently running and accepting requests coming from the server (35up), and the server must be able to react to events and send predefined requests when they occur.
All the necessary requirements on the 35up side are already fulfilled. Our systems are composed of distributed, event-driven microservices, and only require the relevant events to be externalized to the client. On the client side, bringing up a service that accepts external connections should be an easy task in virtually any programming language and framework on the market. In fact, we estimate that this effort is about the same as, if not less than, composing a reliable set of cronjobs to perform the same operations, while also being substantially cheaper.
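For illustration, a minimal webhook receiver could be as small as the sketch below (TypeScript with Express; the route path and port are arbitrary choices, not 35up requirements):

```typescript
import express from 'express';

const app = express();
app.use(express.json()); // webhook request bodies are JSON-encoded

app.post('/webhooks/35up', (req, res) => {
  // Acknowledge quickly and process asynchronously, so the sender
  // is never kept waiting on our business logic.
  res.sendStatus(204);
  handleEvent(req.body).catch((err) => console.error('event handling failed', err));
});

async function handleEvent(event: unknown): Promise<void> {
  // ...dispatch to the appropriate handler based on the event name
}

app.listen(8080);
```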
The webhook system running on the 35up platform is currently in “Beta” and not publicly available, but sellers wanting to use it may request to join the beta program.
During the testing phase, only a limited number of events are available and there is no user-friendly interface. In fact, the requests are configured directly by 35up’s engineers, according to the sellers’ specifications and the systems’ current capabilities.
The webhook endpoints on the sellers’ side are expected to run continuously, 24 hours a day, every day, from the moment the webhook is configured. The webhook service hosted by the client should follow the requirements below:
An important warning: the events system is designed to follow an “at-least-once, entity-ordered” delivery. Under the “at-least-once” approach, the client’s system must be able to tolerate duplicated events. Although rare, duplicated requests about the same event can still happen and, therefore, such events must be handled in an idempotent manner. “Entity-ordered” means that events about the same base entity id are delivered in chronological order; it does not mean that all events are globally ordered.
The constraints above favor a simpler implementation on both the client and server sides, removing the need for multi-phase commits, acknowledgment protocols, and distributed transactions, as long as all events are effectively delivered at least once and the idempotency of the operations is always observed.
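As a sketch of what idempotent handling can look like under these guarantees (the event shape is assumed for illustration; in particular, the unique event id used for deduplication is a hypothetical field, not a documented one):

```typescript
// Assumed event shape for this sketch only; see the payload section
// below for what 35up actually sends.
interface WebhookEvent {
  id: string;       // hypothetical unique event id, used for deduplication
  entityId: string; // id of the entity the event refers to
  status: string;   // current status of that entity
}

// In production this would be a durable store (e.g. a database table);
// an in-memory Set is enough to illustrate the idea.
const processedEventIds = new Set<string>();

function handleEvent(event: WebhookEvent): void {
  // At-least-once delivery: the same event may arrive more than once,
  // so skip anything already processed.
  if (processedEventIds.has(event.id)) return;
  processedEventIds.add(event.id);

  // Entity-ordered delivery: events for the same entityId arrive in
  // chronological order, so simply overwriting the stored status is safe.
  saveCurrentStatus(event.entityId, event.status);
}

function saveCurrentStatus(entityId: string, status: string): void {
  // ...persist the latest known status for this entity
}
```

Alternatively, if no unique event id is available, the operation itself can be made naturally idempotent, for example by only ever overwriting the entity’s current status, so that applying the same event twice has no additional effect.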
Events in the 35up platform are named using a reverse-domain namespace starting with the io.tfup prefix, followed by a scope/audience “subdomain”, the entity (and sub-entities), and the event type. New scopes/audiences, entities/sub-entities, and event types are expected to be added in the future.
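Schematically, the naming pattern looks like this (the concrete event names are listed below):

```
io.tfup.<scope>.<entity>[.<sub-entity>].<event-type>
```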
The following events are currently available as triggers for webhook requests containing updates for the different entities used to process a customer order:
In addition to the above, the following inventory event may be useful for reacting to product availability changes. Exposing this event is considered experimental, and the number of pushed events can be very (we mean VERY) high. Because of that, only the SKUs included in seller-specific product selections (such as custom categories) will be included in outgoing webhook requests:
The documentation is under development and will be published soon. The data in the request body is JSON-encoded.
The payload of each webhook request will include, at the very least, the **id** field of the entity in which the event occurred and its current status.
During the beta program, the payload of webhook requests can be tailored to the sellers’ needs using any of the fields available under the entity in which the event occurred. More details about the available fields will follow in the upcoming documentation.
The inventory-related events are an exception to this rule: they can currently only contain the product SKU and the affected fields (price or availability).
Some examples: