Medical prescription OCR
By the end of this article, you'll be able to build an OCR API that extracts data from medical prescriptions using our deep learning engine.
- You’ll need to sign up for a free account.
- You’ll need at least 20 different medical prescription images or PDFs to train your OCR model.
Define your medical prescription use case
First, we’re going to define which fields we want to extract from a medical prescription.
Here is the list of fields we are going to extract using our OCR API:
- Date
- Doctor's name
- Patient's name
- Medical center's address
- Medical center's phone number
That’s it for our use case. Feel free to add any other relevant data to fit your requirements.
Deploy your API
Once you have defined what fields you want to extract, head over to the platform and press the ‘Create a new API’ button.
You’ll land on the setup page. You can use any image you want to set up the API; my setup looks like this:
Once you’re ready, click the ‘Next’ button. We’ll now specify the data type for each field we want the API to extract.
To go further, you can download this JSON config to set up your data model, or do it manually:
- Date: in European format (DD/MM/YYYY)
- Doctor's name: a name never contains numeric characters
- Patient's name: a name never contains numeric characters
- Medical center's address: an address may contain both numeric and alphabetic characters
- Medical center's phone number: a phone number contains numeric characters and separators
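The data model above could be expressed as a JSON config along these lines. Note that the field names and type keys here are illustrative assumptions, not the platform's exact schema:

```json
{
  "fields": [
    { "name": "date", "type": "date", "format": "DD/MM/YYYY" },
    { "name": "doctor_name", "type": "string", "allow_digits": false },
    { "name": "patient_name", "type": "string", "allow_digits": false },
    { "name": "medical_center_address", "type": "string" },
    { "name": "medical_center_phone", "type": "phone" }
  ]
}
```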
Once you’re done setting up your data model, press the ‘Start training your model’ button at the bottom of the screen.
Train your medical prescription OCR
You’re all set!
Now it's time to train your medical prescription deep learning model in the Training section of the API. Once you have added at least 20 documents to train your model, you'll receive a confirmation email and your custom OCR API for medical prescriptions will be ready to make predictions.
To get more information about the training phase, please refer to the getting started tutorial. If you have any questions regarding your use case, feel free to reach out on our chat!
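Once your model starts making predictions, it can be useful to sanity-check the extracted fields against the data-model rules defined earlier. Here is a minimal Python sketch; the field names (`date`, `doctor_name`, `patient_name`) are assumptions for illustration, and your API's actual response schema may differ:

```python
import re

# European date format DD/MM/YYYY, as defined in the data model above.
EURO_DATE = re.compile(r"^\d{2}/\d{2}/\d{4}$")

def is_valid_name(value: str) -> bool:
    """A name never contains numeric characters."""
    return bool(value.strip()) and not any(ch.isdigit() for ch in value)

def is_valid_euro_date(value: str) -> bool:
    """Date in European format, e.g. 24/03/2021."""
    return bool(EURO_DATE.match(value))

def check_prediction(fields: dict) -> list:
    """Return a list of problems found in a predicted field dict.

    The keys used here ("date", "doctor_name", "patient_name") are
    hypothetical; adapt them to your API's response schema.
    """
    problems = []
    if not is_valid_euro_date(fields.get("date", "")):
        problems.append("date is not in DD/MM/YYYY format")
    for key in ("doctor_name", "patient_name"):
        if not is_valid_name(fields.get(key, "")):
            problems.append(f"{key} is empty or contains digits")
    return problems
```

For example, `check_prediction({"date": "24/03/2021", "doctor_name": "Dr. Smith", "patient_name": "Jane Doe"})` returns an empty list, while a prediction with a digit in a name would be flagged for review.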