Alex James
Boomi Oracle Database Connector: Difference between Commit By Profile and Commit By Number of Rows
Introduction
With Boomi, one can quickly integrate with an Oracle database. Although both the legacy Database and Database V2 connectors support Oracle DB, the Oracle Database connector is built specifically for it and supports Oracle-specific features such as Oracle Wallet/TCPS configuration.
Committing changes to a database is an important step: it permanently saves everything written since the last commit. A clear understanding of the commit options is essential to ensure that data is saved as intended and nothing is lost.
Both standard and dynamic INSERT/UPDATE/DELETE/UPSERT operations in the Boomi Oracle Database connector offer two commit options: "Commit By Profile" and "Commit By Number of Rows".
I performed a proof of concept (POC) to understand the behavior of these two commit options in the Oracle Database connector.
POC Process
As part of my POC, I created a Boomi process and a table named BOOKS with the following columns.

The process below inserts a list of book titles into the Oracle table "BOOKS".

Titles List: a Message shape that passes the list of titles to be inserted into the database. I used three titles and deliberately made one of them long enough to fail during insertion with an "ORA-12899: value too large for column" error.

Oracle Database Insert Operation:
The insertion type is set to "Dynamic Insert". After an import is performed in the Oracle Database operation, request and response JSON profiles are created. When values are mapped from the source profile to the DB JSON profile, a separate document is created for every row to be inserted; for example, inserting 10 rows requires 10 separate JSON documents to be sent to the connector.
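To make that fan-out concrete, the sketch below shows three source titles becoming three separate single-row request documents. The field name TITLE, the sample titles, and the exact document shape are illustrative assumptions; the real profile is whatever the import generates for your table.

    import java.util.List;

    public class TitlesFanOut {
        public static void main(String[] args) {
            // Three source titles; the second is deliberately longer than the column allows
            List<String> titles = List.of(
                    "Clean Code",
                    "An intentionally very long title that exceeds the TITLE column length and triggers ORA-12899",
                    "Refactoring");

            // One request document per row: three titles -> three separate JSON documents
            for (String title : titles) {
                String document = "{ \"TITLE\": \"" + title + "\" }";
                System.out.println(document); // each document is sent to the connector on its own
            }
        }
    }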

Commit By Profile
"Commit By Profile" performs a commit at the document-level and is very specific for that documents. If an error occurs in a document, it impacts the specific document and the other rows will continue to process and will be committed if successful.
When executed the POC process with three titles with the second one with the lengthy title, the first and third titles are inserted successfully with the second one failing with a "ORA-12899: value too large for column" error.

The first and third titles were inserted successfully, as shown below.

The second title failed with an error and was not inserted. However, this error did not impact the insert of the other two rows.
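Because the connector runs on JDBC under the hood, Commit By Profile behaves roughly like the sketch below: each row is executed and committed on its own, and a failure is handled per document without touching the others. The connection URL, credentials, sample titles, and single TITLE column are assumptions for illustration, not the connector's actual implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class CommitByProfileSketch {
        public static void main(String[] args) throws SQLException {
            List<String> titles = List.of("Clean Code", "A very long failing title ...", "Refactoring");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", "app_password")) {
                conn.setAutoCommit(false);

                for (String title : titles) {
                    // Each document (row) is executed and committed independently
                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO BOOKS (TITLE) VALUES (?)")) {
                        ps.setString(1, title);
                        ps.executeUpdate();
                        conn.commit();                   // commit this document only
                    } catch (SQLException e) {
                        conn.rollback();                 // e.g. ORA-12899 on the long title
                        System.err.println("Row failed, others unaffected: " + e.getMessage());
                    }
                }
            }
        }
    }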

Commit By Number of Rows
"Commit By Number of Rows" batches the number of rows defined in the "Batch Count" field and performs a commit for the entire batch.

This commit setting adds the SQL statements, in sequence, to a batch and executes them together. If a SQL statement fails for any reason, the subsequent statements in that batch are NOT executed, while all statements prior to the failed one are executed and committed.
In the titles scenario, all three rows are added to the batch but only the first title is inserted. Because the second row errors out, the third row, although valid, is not inserted.
First record added to batch successfully:

Second record added to batch successfully:

The last record is added to the batch and the batch is executed:

When I queried the database for the inserted rows, only the first title had been inserted; any records after a failed record are ignored.
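This is essentially how JDBC batching tends to behave with the Oracle driver: rows are queued with addBatch(), executed together when the batch is full, and a failure surfaces as a BatchUpdateException, after which the final commit covers only the statements that ran before the failing one. A minimal sketch, with the connection details, sample titles, and TITLE column assumed for illustration:

    import java.sql.BatchUpdateException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class CommitByRowsSketch {
        public static void main(String[] args) throws SQLException {
            List<String> titles = List.of("Clean Code", "A very long failing title ...", "Refactoring");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", "app_password");
                 PreparedStatement ps = conn.prepareStatement("INSERT INTO BOOKS (TITLE) VALUES (?)")) {

                conn.setAutoCommit(false);

                for (String title : titles) {
                    ps.setString(1, title);
                    ps.addBatch();                 // queue each row; nothing executes yet
                }

                try {
                    ps.executeBatch();             // batch size reached -> execute the whole batch
                } catch (BatchUpdateException e) {
                    // Oracle typically stops at the first failing statement (the long title),
                    // so the third row is never executed
                    System.err.println("Batch failed: " + e.getMessage());
                }

                conn.commit();                     // commits only the rows executed before the failure
            }
        }
    }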

Additional Note
I personally feel the Oracle Database connector could be improved in the two areas below. Take the classic case of customer Purchase Order data with a header and lines: a single PO document needs to be mapped to a header table and a lines table.
When the line items are mapped to the lines table, I would prefer a single DB JSON document containing an array of lines, so that all the lines of an order are committed or rolled back together. However, the Boomi Oracle Database connector does not have out-of-the-box support for this and all lines are inserted separately. If one of the lines fails, the related lines that were already inserted have to be deleted by a separate connector operation.
There are also use cases for parent-child inserts and transactional inserts across disparate tables. If the insert fails in one of the tables, the complete transaction has to be rolled back to avoid orphan rows; with the Oracle Database connector, a separate DELETE connector operation is required to clean up the orphan rows. If these two use cases are important to you, consider using the Database V1 connector, which supports both.
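For reference, the all-or-nothing behavior described above is what plain JDBC transaction handling provides: insert the header and all of its lines on one connection, then either commit everything or roll everything back. The PO_HEADERS/PO_LINES tables, their columns, and the sample data below are hypothetical; this is only a sketch of the desired behavior, not how either connector is implemented.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class HeaderLinesTransactionSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", "app_password")) {
                conn.setAutoCommit(false);
                try (PreparedStatement header = conn.prepareStatement(
                         "INSERT INTO PO_HEADERS (PO_NUMBER, CUSTOMER) VALUES (?, ?)");
                     PreparedStatement line = conn.prepareStatement(
                         "INSERT INTO PO_LINES (PO_NUMBER, LINE_NO, ITEM) VALUES (?, ?, ?)")) {

                    header.setString(1, "PO-1001");
                    header.setString(2, "Acme Corp");
                    header.executeUpdate();

                    List<String> items = List.of("Widget", "Gadget", "Gizmo");
                    int lineNo = 1;
                    for (String item : items) {
                        line.setString(1, "PO-1001");
                        line.setInt(2, lineNo++);
                        line.setString(3, item);
                        line.executeUpdate();
                    }

                    conn.commit();        // header and all lines saved together
                } catch (SQLException e) {
                    conn.rollback();      // any failure undoes the header and every line: no orphan rows
                    throw e;
                }
            }
        }
    }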
Please comment if you have any questions, queries, or other inputs.