
Modifying the Import/Export features in SpringBot

Please note: The import feature is only available for SpringBot applications running version 1.3.0.0 or later. When rewriting your application with the bot, these features will be added to all Data Table tiles in your application. C#Bot applications will be receiving this feature soon.

Import/Export features for Data Tables are available in SpringBot applications. They come with any Data Table tile in your application and can be modified using protected regions to better suit the specific requirements of your application. The Import and Export functionality was built using a library called OpenCSV.

Modifying Attributes in Import/Export

When importing and exporting data, the attributes of entities need to be mapped to headers in the CSV files. These mappings are done using annotations added to the attributes in the entity classes on the server-side.

To modify the column name for an attribute in your CSV, as well as whether or not a value is required, you can modify the CsvCustomBindByName or CsvBindByName annotation; one of these will be present on any attribute which is included in the CSV file.

// % protected region % [Modify attribute annotation for Name here] off begin
@CsvBindByName(column = "NAME", required = true)
@NotNull(message = "Name must not be empty")
@Column(name = "name")
@ApiModelProperty(notes = "The Name of this entity.")
@ToString.Include
// % protected region % [Modify attribute annotation for Name here] end
private String name;

// % protected region % [Modify attribute annotation for Date of Birth here] off begin
@CsvCustomBindByName(column = "DATE_OF_BIRTH", converter = DateTimeConverter.class, required = false)
@Nullable
@Column(name = "date_of_birth")
@JsonDeserialize(using = DateTimeDeserializer.class)
@ApiModelProperty(notes = "The Date of Birth of this entity.")
@ToString.Include
// % protected region % [Modify attribute annotation for Date of Birth here] end
private OffsetDateTime dateOfBirth;

Please note that modifying the column names will change the headers which are exported, as well as the headers which are mapped to attributes when importing. Therefore, if you modify these, you will also need to modify the headers in the getExampleCsvHeader() function for that entity.

public static String getExampleCsvHeader() {

    // % protected region % [Modify the headers in the CSV file here] off begin
    return "NAME,DATE_OF_BIRTH,ALIVE,SPECIES_ID,TANK_ID,ID";
    // % protected region % [Modify the headers in the CSV file here] end
}

These headers are used to create the example CSV which the user can download, and they are also used to validate that all of the headers in an imported CSV are valid: the endpoints are configured to reject a CSV which contains a header not present in this list.
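As a rough sketch of what this validation amounts to (the helper method below is hypothetical, not the bot-written implementation), an imported header row can be checked against the values returned by getExampleCsvHeader():

// Sketch only; assumes java.util.Arrays, java.util.HashSet and java.util.Set are imported
private static void validateCsvHeaders(String headerLine) {

    Set<String> allowedHeaders = new HashSet<>(Arrays.asList(getExampleCsvHeader().split(",")));

    for (String header : headerLine.split(",")) {
        if (!allowedHeaders.contains(header.trim().toUpperCase())) {
            // Rejecting the CSV here allows the error to be reported back to the user
            throw new IllegalArgumentException("Unrecognised CSV header: " + header);
        }
    }
}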

You can also add additional columns to your CSVs by adding an additional attribute to the entity class in the Add any additional class fields here protected region. However, any attributes you add will need to be annotated with one of the CSV binding annotations (CsvCustomBindByName or CsvBindByName), as in the sketch below.
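For example, a hypothetical nickname attribute added inside that protected region (the field, column and annotations are illustrative only, following the same pattern as the attributes above) could look like this:

// % protected region % [Add any additional class fields here] on begin
@CsvBindByName(column = "NICKNAME", required = false)
@Nullable
@Column(name = "nickname")
@ApiModelProperty(notes = "The Nickname of this entity.")
private String nickname;
// % protected region % [Add any additional class fields here] end

If you add a column this way, remember to add its header to getExampleCsvHeader() as well, so that imports containing it are not rejected.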

Modifying Parsing of Data

There are two methods which are used to parse the data given in a CSV import.

The first is a default method used by the OpenCSV library, which is sufficient for parsing standard data types such as strings, integers and booleans.

Other data types such as UUIDs, datetimes and enum literals are much more likely to lead to parsing errors, so we need to define how they will be parsed when they are included in a CSV. This is done using converters, which can be found at serverside/src/main/java/[project name]/services/utils/converters.

Converters

Converters take a string input, which is the value given for that attribute in the CSV, and turn it into a value of the specified type.

The example above shows how you can use the CsvCustomBindByName annotation to determine which converter will be used to parse that attribute in the CSV file. An example of a converter used to convert a string input to a UUID object can be seen here:

import com.opencsv.bean.AbstractBeanField;
import com.opencsv.exceptions.CsvDataTypeMismatchException;

import java.util.UUID;

public class UUIDConverter extends AbstractBeanField<UUID, UUID> {

    @Override
    protected Object convert(String idInput) throws CsvDataTypeMismatchException {
        UUID id;

        if (idInput == null || idInput.isEmpty()) {
            return null;
        }

        try {
            id = UUID.fromString(idInput);
        } catch (IllegalArgumentException e) {
            throw new CsvDataTypeMismatchException(idInput + " must be a valid UUID");
        }

        return id;
    }
}

As you can see, we first handle the situation where the input is null or empty. While these inputs would throw an error in the conversion to UUID, these are valid inputs to the converter, and as such they must be supported. We also throw a CsvDataTypeMismatchException when the format is incorrect. The reason for this is that the parsing stage in the Import endpoint will catch these errors, and return them to the client-side.

If there is a particular format you would like your imported attributes to follow, you can create a converter which ensures they are given in that format, and either normalises the input before proceeding, or throws an error to be returned to the client-side so that the user can make changes to their CSV, as in the sketch below.
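For instance, a hypothetical converter which only accepts dates written as dd/MM/yyyy for a LocalDate attribute (the class name and format are illustrative, not bot-written code) might look like this:

import com.opencsv.bean.AbstractBeanField;
import com.opencsv.exceptions.CsvDataTypeMismatchException;

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class StrictLocalDateConverter extends AbstractBeanField<LocalDate, LocalDate> {

    private static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("dd/MM/yyyy");

    @Override
    protected Object convert(String dateInput) throws CsvDataTypeMismatchException {
        // Empty cells are valid input to the converter and map to a null attribute
        if (dateInput == null || dateInput.isEmpty()) {
            return null;
        }

        try {
            return LocalDate.parse(dateInput, FORMAT);
        } catch (DateTimeParseException e) {
            // This message is caught by the Import endpoint and returned to the client-side
            throw new CsvDataTypeMismatchException(dateInput + " must be a date in the format dd/MM/yyyy");
        }
    }
}

The converter would then be referenced from the attribute's @CsvCustomBindByName annotation, in the same way as the DateTimeConverter shown earlier.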

Configuring input to allow Excel files

Importing Excel files is not currently supported in bot-written code; however, there is a path for you to implement it yourself if CSV imports are not sufficient for your requirements.

The endpoint can be created in the controller in a similar way to the existing Import endpoints. The file import logic would work very similarly, as the endpoint could be set up to accept a file in the same way. The logic used later in the function, which validates the entities being created, would also be very similar, with the primary difference being the method used to parse the entities.

Parsing will most likely need to be implemented using an external library, or you can implement it yourself. Any new endpoints you create for this can be added to the Add any additional endpoints here protected region in the controller files. The existing CSV import endpoint below shows the general shape of such an endpoint, and a sketch of the Excel parsing follows it.

@PreAuthorize("hasPermission('TankEntity', 'create')  || hasPermission('TankEntity', 'update')")
@PostMapping(value = "/import-csv")
public ResponseEntity<String> csvUpload(HttpServletRequest request,
                                        @ApiParam("The CSV file with TankEntity entities to import")
                                        @RequestParam("file") MultipartFile file) throws IOException {
    ...

    BufferedReader reader = new BufferedReader(new InputStreamReader(file.getInputStream()));

    ...

}
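If you choose a library such as Apache POI for the parsing (used here purely as an illustration, it is not included in bot-written code), the body of a new Excel endpoint might look something like the following sketch, where the row-to-attribute mapping and setter are hypothetical:

import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import java.util.ArrayList;
import java.util.List;

...

try (Workbook workbook = new XSSFWorkbook(file.getInputStream())) {
    Sheet sheet = workbook.getSheetAt(0);
    DataFormatter formatter = new DataFormatter();

    List<TankEntity> entities = new ArrayList<>();
    for (Row row : sheet) {
        // Skip the header row
        if (row.getRowNum() == 0) {
            continue;
        }

        TankEntity entity = new TankEntity();
        // Map each cell to an attribute; this setter is hypothetical
        entity.setName(formatter.formatCellValue(row.getCell(0)));
        entities.add(entity);
    }

    // From here the entities can be validated and saved in the same way as the CSV import
}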

To implement these changes on the client-side, you need to overwrite the URL of the endpoint which the Import function accesses. This can be done in the service functions for the entities you want to change, in the Overwrite the url of the import api here protected region.

If you have set up another endpoint which implements Excel imports, then the only changes required on the client-side will be changing the endpoint URL in the service, and updating the accepted file types in the file input to allow .xlsx files. This can be accomplished by turning on the Customise your collection here protected region and adding [importAcceptedFileTypes]="'.xlsx'" to the inputs of the cb-collection component.
