SAP Cloud Platform Integration (CPI) Part 17 – Batch Processing in HTTP Adapter

In this post, we will explore batch processing in the HTTP adapter and learn how to construct a batch payload and process multiple items/tasks through the HTTP adapter.

Overview of this blog series: Batch Processing in HTTP Adapter

1. SAP CPI Introduction
2. SAP BTP tools and features overview (BTP, Global Account, Sub-Account, Entitlements, Connectivity, Security)
3. SAP CPI Web IDE overview
4. Registering a trial account and enrolling in the SAP CPI service
5. Deep dive into Cloud Integration features with a real-world scenario example
6. Use cases of palette functions
7. Adapter configurations
8. Using Cloud connector for connecting to backend SAP systems
9. Overview of API Management & Open Connectors
10. Integration using Open Connectors with a real-world example

In short, below is the content we will cover in this tutorial:

1. Traditional Approach vs Batch Processing
2. Terms to know – Batch and Changeset
3. HTTP Adapter config for batch processing
4. Structure of a Batch Payload


SAP Cloud Platform Integration (CPI) is a cloud-based integration solution that allows organizations to connect their on-premises and cloud applications, data sources, and APIs. With SAP CPI, organizations can automate their business processes and exchange data in real time between different systems. One of the key capabilities of SAP CPI is batch processing, which allows users to handle large volumes of data efficiently. In this blog post, we will discuss how to use the batch processing feature of the HTTP adapter in SAP CPI.

1. Traditional Approach vs Batch Processing

When it comes to processing data in SAP CPI, there are two main approaches: the traditional approach and the batch processing approach.

Traditional Approach:

The traditional approach involves processing data one record at a time. Each record is processed individually, and the integration flow sends a response to the sender after each record is processed. This approach works for small to medium-sized data sets, but it tends to be inefficient for large volumes of data.

Batch Processing Approach:

Batch processing is a capability that allows users to group multiple transactions into a single unit of work. With batch processing, users can process large volumes of data in a single operation, which can improve performance and reduce network traffic. The batch processing approach is suitable for large data volumes, such as data migration or data synchronization scenarios.

Comparison between the Traditional and Batch Processing Approaches

  • Processing Time: In the traditional approach, data is processed one record at a time, which can take a long time for large data sets. In contrast, batch processing allows users to process large volumes of data in a single operation, which can significantly reduce processing time.
  • Network Traffic: In the traditional approach, the integration flow sends a response to the sender after each record is processed. This can generate a lot of network traffic, especially for large data sets. With batch processing, users can process multiple records in a single operation, which can significantly reduce network traffic.
  • Error Handling: In the traditional approach, errors can occur for each record processed, and the integration flow has to handle each error individually. With batch processing, errors can be handled more efficiently: if a request fails, the entire batch (or changeset) can be rolled back, which ensures data consistency.
  • Simplified Integration: Batch processing simplifies integration by reducing the number of HTTP calls required to process data. In contrast, the traditional approach requires a separate HTTP call for each record processed.

The HTTP and OData adapters in SAP CPI support batch processing, which allows users to send multiple requests in a single call. In this blog post, we will see how to construct a batch payload and post it through the HTTP adapter. In upcoming posts, we will see how to post it via the OData adapter.

To use batch processing in the HTTP adapter, you need to build a batch request payload. The batch request payload contains multiple HTTP requests, each with its own HTTP method, URL, and headers. The batch request payload is sent to the HTTP adapter, which processes each request and returns a batch response payload. The batch payload is sent with the "multipart/mixed" content type. This content type specifies a boundary parameter, a unique string that delimits each part of the batch payload.
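To make this concrete, the outer request that carries the batch could look roughly like the sketch below. The host, service path, and credentials are placeholders, and the boundary value "batch" is just an example name that must match the boundary used inside the payload:

POST /sap/opu/odata/sap/<SERVICE_NAME>/$batch HTTP/1.1
Host: <backend-host>
Authorization: <credentials>
Content-Type: multipart/mixed; boundary=batch

--batch
...individual requests or changesets...
--batch--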

2. Terminologies to know – Batch and Changeset

Before seeing how to construct a batch payload, there are certain terminologies that need to be known: Batch and Changeset.

Batch:

The term "batch" in the batch payload refers to the collection of multiple requests that are grouped together. A batch payload must begin with the "--batch" keyword, with two dashes at the start. And at the very end, it must close with "--batch--".

Changeset:

A "changeset" is a group of related HTTP requests that are processed as a single transaction. A changeset allows multiple requests to be grouped so that they can be processed as a single unit of work, ensuring data consistency.

A changeset is a part of a batch request that contains a collection of related HTTP requests. These requests can be of different types, such as GET, POST, PUT, DELETE, etc., and can be sent to the same or different endpoints. Each request within a changeset must have a unique Content-ID, which is used to identify the request within the changeset. When a batch request containing a changeset is sent to the HTTP adapter, the adapter processes the requests within the changeset as a single transaction. If any of the requests within the changeset fails, the entire changeset is rolled back, ensuring data consistency.

Changesets are useful when multiple requests must be processed as a single transaction, such as updating multiple records in a database. By grouping the requests in a changeset, users can ensure that all updates are applied consistently and any errors are handled efficiently.
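As an illustrative sketch, one request part inside a changeset could look like this; the changeset boundary, entity set name, and JSON body are placeholders, and the Content-ID header identifies the request within the changeset as described above:

--changeset1
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST EntitySetName HTTP/1.1
Content-Type: application/json

{ ...JSON payload of the operation... }

--changeset1--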

3. HTTP Adapter Config for Batch Processing

Before looking at the structure of the batch payload, let's see how the HTTP adapter configuration should look.

Two changes are required:

  • Add the batch keyword "/$batch" at the end of the URL in the HTTP adapter settings, under the Connection tab.
  • The method should always be POST. When using batch processing, all requests within a batch are sent as a single HTTP POST request to the target endpoint. The reason for using POST is that it allows a large amount of data to be transmitted in the request body; when sending multiple requests as a batch, the payload can become quite large, and POST allows larger payloads than methods such as GET, which has a much smaller payload limit. In addition, POST allows the request/response message headers and message bodies of the individual requests to be carried in the same message, which simplifies batch processing, as all requests travel in a single message, making it easier to manage and handle the responses. Another advantage of using POST for batch processing is that it allows the use of authentication and authorization headers, ensuring that only authorized users can access the data within the batch.
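For instance, if the target were the sales pricing condition record OData service that appears in the sample payloads further below, the Address field of the HTTP adapter might look like the line below (the host name is a placeholder), with the Method set to POST:

https://<s4hana-host>/sap/opu/odata/sap/API_SLSPRICINGCONDITIONRECORD_SRV/$batch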

4. Structure of a Batch Payload

Let's look at the structure of the batch payload:

--batch_name
Content-Type: multipart/mixed; boundary=changeset_Name
{{1 line space}}
--changeset_Name
Content-Type: application/http
Content-Transfer-Encoding: binary
{{1 line space}}
METHOD Entity_Name(KEY FIELD) HTTP/1.1
Content-Type: application/json
If-Match: xxxxxxx
{{1 line space}}
{
{{Payload body in JSON format}}
}
{{1 line space}}
--changeset_Name--
{{1 line space}}
--batch_name--

Points to Note:-

  • All batch requests should begin with "--batch" and end with "--batch--", as stated earlier. There should be only one batch per request.
  • In the next line, the content type and the changeset name should be declared. The changeset name is user-defined. A batch request can contain 'n' number of changesets (1:n), and one changeset can contain multiple operations within it (like GET, PUT, DELETE – 1:n).
  • The line spaces shown in the structure above are mandatory. If a space is missing, you may get an error such as "batch request contains malformed syntax" during execution.
  • Give the name you declared earlier, which marks the beginning of the changeset. After that, give the content type and the content transfer encoding.
  • Leave a line and give the HTTP method followed by the entity set name.
  • Content-Type is required, and If-Match is required for certain operations. To supply If-Match, we need the ETag, which can be obtained by making a GET call prior to this payload step, passing the relevant entity and key field (see the sketch after this list). For POST/PUT/GET operations, If-Match is not required; for PATCH/DELETE, it is required.
  • Leave a line space and pass the payload in JSON format.
  • Finally, close the changeset and the batch.
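Referring back to the If-Match point above, a minimal sketch of the preceding ETag lookup could be a plain GET on the same entity, as shown below; the service path and host are assumptions based on the sample payload that follows. The value returned in the response's ETag header is then passed as the If-Match value of the PATCH or DELETE request:

GET /sap/opu/odata/sap/API_SLSPRICINGCONDITIONRECORD_SRV/A_SlsPrcgConditionRecord('0000008408') HTTP/1.1
Host: <s4hana-host>
Accept: application/json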

Sample batch payload:

The payload below contains two changesets with different entities.

--batch
Content-Type: multipart/mixed; boundary=changeset1

--changeset1
Content-Type: application/http
Content-Transfer-Encoding: binary

PATCH A_SlsPrcgConditionRecord('0000008408') HTTP/1.1
Content-Type: application/json
If-Match: W/"'ABC064CBD736CA512B19A82EA83624047B894E'"

{"ConditionRecord":"0000008408","ConditionTable":"728","ConditionType":"PR00","ConditionValidityEndDate":"9999-12-31T12:12:00","ConditionValidityStartDate":"2020-01-01T12:01:00","ConditionRateValue":"2000","ConditionIsDeleted":false,"PaymentTerms":"0002","AdditionalValueDays":"23","ConditionRateValueUnit":"EUR","ETag":"*"}

--changeset1--

--batch
Content-Type: multipart/mixed; boundary=changeset_2

--changeset_2
Content-Type: application/http
Content-Transfer-Encoding: binary

PATCH A_SlsPrcgCndnRecdValidity(ConditionRecord='0000008408',EndDate='9999-12-31') HTTP/1.1
Content-Type: application/json
If-Match: W/"'CDEF5EB296ED973B62E5E283AFC45E249B492E1A'"

{
"ConditionValidityEndDate": "9999-12-31T12:12:00",
"ConditionValidityStartDate": "2020-01-01T12:01:00"
}

--changeset_2--
--batch--

Batch Response:

In the batch response, the HTTP status code is returned for the batch as a whole and also per changeset. When a batch request is sent to the target endpoint, each request is processed separately, and a response is generated for each request. These individual responses are then combined into a single batch response, which is returned to the sender.

If all requests within the batch are processed successfully, the target endpoint generates an individual response with a success HTTP status code (for example, 200 OK or 204 No Content) for each request. These individual responses are then combined into a single batch response with a status code of 200, indicating that all requests were successful.

If any of the requests within the batch fail, the target endpoint generates an individual response with an HTTP status code indicating the error, such as 400 for a bad request or 500 for an internal server error. The individual responses are then combined into a single batch response, which includes the failed requests and their corresponding error codes.

Here is the batch response of the above batch payload posted through the HTTP adapter:

--9CA6478CBDFC8529E729A34C373466F20
Content-Type: multipart/mixed; boundary=9CA6478CBDFC8529E729A34C373466F21
Content-Length:          238

--9CA6478CBDFC8529E729A34C373466F21
Content-Type: application/http
Content-Length: 71
content-transfer-encoding: binary

HTTP/1.1 204 No Content
Content-Length: 0
dataserviceversion: 2.0

--9CA6478CBDFC8529E729A34C373466F21--

--9CA6478CBDFC8529E729A34C373466F20
Content-Type: multipart/mixed; boundary=9CA6478CBDFC8529E729A34C373466F21
Content-Length:          238

--9CA6478CBDFC8529E729A34C373466F21
Content-Type: application/http
Content-Length: 71
content-transfer-encoding: binary

HTTP/1.1 204 No Content
Content-Length: 0
dataserviceversion: 2.0


--9CA6478CBDFC8529E729A34C373466F21--

--9CA6478CBDFC8529E729A34C373466F20--

The above data was updated successfully. Hence, the 204 No Content status is returned.

Advantages of using an individual changeset for each operation:

Using individual changesets makes it easier to manage and maintain the batch request. Each changeset can be processed independently, which simplifies the logic and reduces the complexity of the batch request. If any operation fails, it affects only that changeset, and the rest of the changesets are processed successfully.

Error case in batch processing:

Here is the batch response where one of the changesets failed because of a data issue. The HTTP status code for the batch call itself is 200 OK, while the failing changeset carries 400 Bad Request.

--EC1566AF3097B912CE85633F6FAB238C0
Content-Type: multipart/mixed; boundary=EC1566AF3097B912CE85633F6FAB238C1
Content-Length: 238
--EC1566AF3097B912CE85633F6FAB238C1
Content-Type: application/http
Content-Length: 71
content-transfer-encoding: binary

HTTP/1.1 204 No Content
Content-Length: 0
dataserviceversion: 2.0

--EC1566AF3097B912CE85633F6FAB238C1--

--EC1566AF3097B912CE85633F6FAB238C0
Content-Type: multipart/mixed; boundary=EC1566AF3097B912CE85633F6FAB238C1
Content-Length: 238

--EC1566AF3097B912CE85633F6FAB238C1
Content-Type: application/http
Content-Length: 71
content-transfer-encoding: binary

HTTP/1.1 204 No Content
Content-Length: 0
dataserviceversion: 2.0

--EC1566AF3097B912CE85633F6FAB238C1--

--EC1566AF3097B912CE85633F6FAB238C0
Content-Type: application/http
Content-Length: 1908
content-transfer-encoding: binary

HTTP/1.1 400 Bad Request
Content-Type: application/xml;charset=utf-8
Content-Length: 1788
dataserviceversion: 1.0

<?xml version="1.0" encoding="utf-8"?><error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"><code>VK/047</code><message xml:lang="en">Amount 7.000,00 EUR must be smaller than the preceding value 6.000,00 EUR</message><innererror><application><component_id>SD-MD-CM</component_id><service_namespace>/SAP/</service_namespace><service_id>API_SLSPRICINGCONDITIONRECORD_SRV</service_id><service_version>0001</service_version></application><transactionid>ef74e1e2a20e46f0a20f23fb71d78168</transactionid><timestamp>20230415152139.3949610</timestamp><Error_Resolution><SAP_Transaction>For backend administrators: use ADT feed reader "SAP Gateway Error Log" or run transaction /IWFND/ERROR_LOG on SAP Gateway hub system and search for entries with the timestamp above for more details</SAP_Transaction><SAP_Note>See SAP Note 1797736 for error analysis (https://service.sap.com/sap/support/notes/1797736)</SAP_Note><Batch_SAP_Note>See SAP Note 1869434 for details about working with $batch (https://service.sap.com/sap/support/notes/1869434)</Batch_SAP_Note></Error_Resolution><errordetails><errordetail><ContentID/><code>VK/047</code><message>Amount 7.000,00 EUR must be smaller than the preceding value 6.000,00 EUR</message><propertyref/><severity>error</severity><target/><transition>false</transition></errordetail><errordetail><ContentID/><code>/IWBEP/CX_MGW_BUSI_EXCEPTION</code><message>Exception raised without specific error</message><propertyref/><severity>error</severity><target/><transition>false</transition></errordetail><errordetail><ContentID/><code>/IWBEP/CX_MGW_BUSI_EXCEPTION</code><message>An exception was raised</message><propertyref/><severity>error</severity><target/><transition>false</transition></errordetail></errordetails></innererror></error>
--EC1566AF3097B912CE85633F6FAB238C0--

I hope this was a good learning experience for understanding batch processing in the HTTP adapter! In the next post, we will explore batch processing in the OData adapter and see how it compares and contrasts with batch processing in the HTTP adapter.

 
