
Optical Character Recognition By Camera Using Google Vision API On Android

In this tutorial, we will learn how to perform Optical Character Recognition from the camera in Android using the Google Vision API. We will import the Google Vision API library in Android Studio and implement OCR to retrieve text from the camera preview.
You can find my previous tutorial on Optical Character Recognition using the Google Vision API, which recognizes text from an image, here. That tutorial covered the introduction to the Google Vision API, so without any delay, let's jump into the coding part.

Coding Part:

Steps: 
I have split this part into the following four steps.
Step 1: Creating New Project with Empty Activity and Gradle Setup. 
Step 2: Setting up Manifest for OCR. 
Step 3: Implementing Camera View using SurfaceView. 
Step 4: Implementing OCR in Application.

Step 1: Creating New Project with Empty Activity and Gradle Setup

We will start coding for OCR. Create a new Android project, then add the following line to your app-level build.gradle file to import the library.
implementation 'com.google.android.gms:play-services-vision:15.2.0'
Step 2: Setting up Manifest for OCR
Open your manifest file and add the following code block to instruct the app to download the OCR dependencies at install time.
<meta-data android:name="com.google.android.gms.vision.DEPENDENCIES" android:value="ocr"/>

Step 3: Implementing Camera View using SurfaceView

Open your activity_main.xml file and paste the following code. This is just the layout (design) part of the application.
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.androidmads.ocrcamera.MainActivity">

    <SurfaceView
        android:id="@+id/surface_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:id="@+id/txtview"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toBottomOf="parent"
        android:text="No Text"
        android:textColor="@android:color/white"
        android:textSize="20sp"
        android:padding="5dp"/>

</android.support.constraint.ConstraintLayout>

Step 4: Implementing OCR in Application

Open your MainActivity.java file and initialize the widgets used in your layout. Add the following code to start the camera view. 
  • Implement SurfaceHolder.Callback and Detector.Processor in your Activity to start the camera preview.
TextRecognizer txtRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
if (!txtRecognizer.isOperational()) {
    Log.e("Main Activity", "Detector dependencies are not yet available");
} else {
    cameraSource = new CameraSource.Builder(getApplicationContext(), txtRecognizer)
            .setFacing(CameraSource.CAMERA_FACING_BACK)
            .setRequestedPreviewSize(1280, 1024)
            .setRequestedFps(2.0f)
            .setAutoFocusEnabled(true)
            .build();
    cameraView.getHolder().addCallback(this);
    txtRecognizer.setProcessor(this);
}
Here, TextRecognizer performs character recognition on the camera preview, and txtRecognizer.isOperational() checks whether the device supports the Google Vision API. The output of the TextRecognizer can be retrieved using a SparseArray and a StringBuilder. 
TextBlock:
I have used TextBlock to retrieve the paragraph from the image using OCR.
Lines:
You can get the lines from a TextBlock using
textblockName.getComponents()
Element:
You can get the words (elements) from a Line using
lineName.getComponents()
The camera source starts scanning when the surface is created via the callback. The received detections are read through a SparseArray, which is similar to reading data from a bitmap in Android. The TextView at the bottom of the screen previews the scanned text live.
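The block/line/element traversal used to build that string can be sketched in plain Java. Note that the Part class below is a hypothetical stand-in for the Vision API's TextBlock/Text hierarchy, used only to illustrate the StringBuilder logic, not a real Vision class:

```java
import java.util.List;

// Hypothetical stand-in for the Vision API's TextBlock / Text hierarchy.
class Part {
    private final String value;
    private final List<Part> components;

    Part(String value, List<Part> components) {
        this.value = value;
        this.components = components;
    }

    String getValue() { return value; }
    List<Part> getComponents() { return components; }
}

public class TraversalSketch {

    // Mirrors receiveDetections: append each block, then its lines,
    // then their elements, separated by "/".
    static String flatten(List<Part> blocks) {
        StringBuilder strBuilder = new StringBuilder();
        for (Part block : blocks) {
            strBuilder.append(block.getValue()).append("/");
            for (Part line : block.getComponents()) {
                strBuilder.append(line.getValue()).append("/");
                for (Part element : line.getComponents()) {
                    strBuilder.append(element.getValue()).append("/");
                }
            }
        }
        return strBuilder.toString();
    }

    // One sample "block" containing one "line" made of two "elements" (words).
    static List<Part> sampleBlocks() {
        Part hello = new Part("Hello", List.of());
        Part world = new Part("World", List.of());
        Part line = new Part("Hello World", List.of(hello, world));
        Part block = new Part("Hello World", List.of(line));
        return List.of(block);
    }

    public static void main(String[] args) {
        System.out.println(flatten(sampleBlocks()));
        // Hello World/Hello World/Hello/World/
    }
}
```

Because a block's value already contains its lines, and a line's value already contains its words, the flattened string repeats text at each level; in a real app you would usually append only one level.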

Full Code:

You can find the full code here.
public class MainActivity extends AppCompatActivity implements SurfaceHolder.Callback, Detector.Processor {

    private SurfaceView cameraView;
    private TextView txtView;
    private CameraSource cameraSource;

    @SuppressLint("MissingPermission")
    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        switch (requestCode) {
            case 1: {
                if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    try {
                        cameraSource.start(cameraView.getHolder());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            break;
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        cameraView = findViewById(R.id.surface_view);
        txtView = findViewById(R.id.txtview);
        TextRecognizer txtRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
        if (!txtRecognizer.isOperational()) {
            Log.e("Main Activity", "Detector dependencies are not yet available");
        } else {
            cameraSource = new CameraSource.Builder(getApplicationContext(), txtRecognizer)
                    .setFacing(CameraSource.CAMERA_FACING_BACK)
                    .setRequestedPreviewSize(1280, 1024)
                    .setRequestedFps(2.0f)
                    .setAutoFocusEnabled(true)
                    .build();
            cameraView.getHolder().addCallback(this);
            txtRecognizer.setProcessor(this);
        }
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            if (ActivityCompat.checkSelfPermission(this,
                    Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA},1);
                return;
            }
            cameraSource.start(cameraView.getHolder());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {

    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        cameraSource.stop();
    }

    @Override
    public void release() {

    }

    @Override
    public void receiveDetections(Detector.Detections detections) {
        SparseArray items = detections.getDetectedItems();
        final StringBuilder strBuilder = new StringBuilder();
        for (int i = 0; i < items.size(); i++) {
            TextBlock textBlock = (TextBlock) items.valueAt(i);
            strBuilder.append(textBlock.getValue());
            strBuilder.append("/");
            // The following loops show how to use lines & elements as well
            for (Text line : textBlock.getComponents()) {
                // extract scanned text lines here
                Log.v("lines", line.getValue());
                strBuilder.append(line.getValue());
                strBuilder.append("/");
                for (Text element : line.getComponents()) {
                    // extract scanned text words here
                    Log.v("element", element.getValue());
                    strBuilder.append(element.getValue());
                }
            }
        }
        Log.v("strBuilder", strBuilder.toString());

        txtView.post(new Runnable() {
            @Override
            public void run() {
                txtView.setText(strBuilder.toString());
            }
        });
    }
}

Download Code:

You can download the full source code for this article from GitHub. If you like this article, star the repo and share it.

Optical Character Recognition using Google Vision API on Android




In this tutorial, we will learn how to perform Optical Character Recognition in Android using the Google Vision API. We will import the Google Vision API library in Android Studio and implement OCR to retrieve text from an image.

Android Mobile Vision API:

The Mobile Vision API provides a framework for finding objects in photos and video. The framework includes detectors, which locate and describe visual objects in images or video frames, and an event-driven API that tracks the position of those objects in video. The Mobile Vision API includes face, barcode, and text detectors, which can be applied separately or together. 
It is not only used to get text from an image; it also structures the retrieved text. It divides the captured text into the following categories.
  • TextBlock - captures a scanned paragraph.
  • Line - a line of text captured from a TextBlock.
  • Element - a word captured from a Line.
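As a rough analogy (this sketch does not use the Vision API itself, and the sample text is purely illustrative), a recognized paragraph decomposes the same way a string splits into lines and words:

```java
public class HierarchySketch {

    // A "TextBlock" is roughly the whole paragraph;
    // splitting on newlines gives its "Lines"...
    static String[] linesOf(String textBlock) {
        return textBlock.split("\n");
    }

    // ...and splitting a line on spaces gives its "Elements" (words).
    static String[] elementsOf(String line) {
        return line.split(" ");
    }

    public static void main(String[] args) {
        String textBlock = "Google Vision API\nrecognizes text";
        System.out.println(linesOf(textBlock).length);                // 2 lines
        System.out.println(elementsOf(linesOf(textBlock)[0]).length); // 3 words
    }
}
```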

Coding Part:

Step 1
We will start coding for OCR. Create a new Android project, then add the following line to your app-level build.gradle file to import the library.

For Android Studio before 3.0
compile 'com.google.android.gms:play-services-vision:11.8.0'
From Android Studio 3.0
implementation 'com.google.android.gms:play-services-vision:11.8.0'
Step 2
Open your manifest file and add the following code block to instruct the app to download the OCR dependencies at install time.
<meta-data android:name="com.google.android.gms.vision.DEPENDENCIES"
android:value="ocr"/>
Step 3
Open your activity_main.xml file and paste the following code. This is just the layout (design) part of the application.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="15dp">
    <ImageView
        android:id="@+id/image_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:scaleType="centerInside" />
    <Button
        android:id="@+id/btnProcess"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Process" />
    <TextView
        android:id="@+id/txtView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="No Text"
        android:layout_gravity="center"
        android:textSize="25sp" />
</LinearLayout>
Open your MainActivity.java file and initialize the widgets used in your layout. Add the following code to start Optical Character Recognition.
// Get a bitmap from the resource folder of the application.
bitmap = BitmapFactory.decodeResource(getApplicationContext().getResources(), R.drawable.ocr_sample);
// Start the Text Recognizer
TextRecognizer txtRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
if (!txtRecognizer.isOperational()) {
    // Shown if Google Play services is not up to date or OCR is not supported on the device
    txtView.setText("Detector dependencies are not yet available");
} else {
    // Wrap the bitmap in a Frame to perform the OCR operations.
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray items = txtRecognizer.detect(frame);
    StringBuilder strBuilder = new StringBuilder();
    for (int i = 0; i < items.size(); i++) {
        TextBlock item = (TextBlock) items.valueAt(i);
        strBuilder.append(item.getValue());
        strBuilder.append("/");
        // The following loops show how to use lines & elements as well
        for (Text line : item.getComponents()) {
            // extract scanned text lines here
            Log.v("lines", line.getValue());
            for (Text element : line.getComponents()) {
                // extract scanned text words here
                Log.v("element", element.getValue());
            }
        }
    }
    txtView.setText(strBuilder.toString());
}
txtRecognizer.isOperational() checks whether the device supports the Google Vision API. The output of the TextRecognizer can be retrieved using a SparseArray and a StringBuilder.

TextBlock:

I have used TextBlock to retrieve the paragraph from the image using OCR.

Lines:

You can get the lines from a TextBlock using
textblockName.getComponents()

Element:

You can get the words (elements) from a Line using
lineName.getComponents()

Demo:

The app shows the text recognized from the sample image.

Download Code:

You can download the code for this post from GitHub. If you like this tutorial, star it on GitHub.