Optical Character Recognition By Camera Using Google Vision API On Android
In this tutorial, we will learn how to do Optical Character Recognition by Camera in Android using the Google Vision API. Here, we will just import the Google Vision API library with Android Studio and implement OCR to retrieve text from the camera preview.
You can find my previous tutorial on Optical Character Recognition using the Google Vision API, which recognizes text from an image, here. That tutorial covered the introduction to the Google Vision API. So, without any delay, let's jump into the coding part.

Coding Part:

Steps: 
I have split this part into the following four steps. 
Step 1: Creating New Project with Empty Activity and Gradle Setup. 
Step 2: Setting up Manifest for OCR. 
Step 3: Implementing Camera View using SurfaceView. 
Step 4: Implementing OCR in Application.

Step 1: Creating New Project with Empty Activity and Gradle Setup

Let's start coding for OCR. Create a new Android project and add the following line to your app-level build.gradle file to import the library.
implementation 'com.google.android.gms:play-services-vision:15.2.0'
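In case it helps, the dependencies block of the app-level build.gradle would then look roughly like this; the other entries are only placeholders for whatever your project already declares:
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    // Google Play services Vision library used for text recognition (OCR)
    implementation 'com.google.android.gms:play-services-vision:15.2.0'
    // ...the rest of your existing dependencies (support library, constraint layout, etc.)
}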
Step 2: Setting up Manifest for OCR
Open your AndroidManifest.xml file and add the following meta-data inside the <application> tag. It instructs Google Play services to download the OCR dependencies at the time the app is installed.
<meta-data android:name="com.google.android.gms.vision.DEPENDENCIES" android:value="ocr"/>
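For reference, here is a minimal sketch of how the AndroidManifest.xml could look with this entry in place. The label and theme values are just placeholders, and the CAMERA permission is declared because the app opens the camera preview (we also request it at runtime in Step 4):
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.androidmads.ocrcamera">

    <!-- Required to open the camera for the live preview -->
    <uses-permission android:name="android.permission.CAMERA" />

    <application
        android:label="@string/app_name"
        android:theme="@style/AppTheme">

        <!-- Ask Play services to download the OCR dependencies at install time -->
        <meta-data
            android:name="com.google.android.gms.vision.DEPENDENCIES"
            android:value="ocr" />

        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>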

Step 3: Implementing Camera View using SurfaceView

Open your activity_main.xml file and paste the following code. This is just the design part of the application.
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.androidmads.ocrcamera.MainActivity">

    <SurfaceView
        android:id="@+id/surface_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:id="@+id/txtview"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toBottomOf="parent"
        android:text="No Text"
        android:textColor="@android:color/white"
        android:textSize="20sp"
        android:padding="5dp"/>

</android.support.constraint.ConstraintLayout>

Step 4: Implementing OCR in Application

Open your MainActivity.java file and initialize the widgets used in your layout. Add the following code to start the camera view. 
  • Implement your Activity with SurfaceHolder.Callback and Detector.Processor to start your camera preview (a minimal class skeleton is shown after the following snippet).
// Create the TextRecognizer from the Vision API
TextRecognizer txtRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
if (!txtRecognizer.isOperational()) {
    // The OCR dependencies have not been downloaded to the device yet
    Log.e("Main Activity", "Detector dependencies are not yet available");
} else {
    // Build a CameraSource that feeds preview frames to the recognizer
    cameraSource = new CameraSource.Builder(getApplicationContext(), txtRecognizer)
            .setFacing(CameraSource.CAMERA_FACING_BACK)
            .setRequestedPreviewSize(1280, 1024)
            .setRequestedFps(2.0f)
            .setAutoFocusEnabled(true)
            .build();
    // Start the preview when the surface is ready and receive the detections in this Activity
    cameraView.getHolder().addCallback(this);
    txtRecognizer.setProcessor(this);
}
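Before that snippet can compile, the Activity itself has to implement both interfaces. As a minimal sketch, assuming the same class and field names as the full code further below, these are exactly the callbacks the two interfaces require you to override:
import android.support.v7.app.AppCompatActivity;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.TextView;

import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.text.TextBlock;

public class MainActivity extends AppCompatActivity
        implements SurfaceHolder.Callback, Detector.Processor<TextBlock> {

    private SurfaceView cameraView;    // live camera preview
    private TextView txtView;          // shows the recognized text
    private CameraSource cameraSource; // feeds preview frames to the recognizer

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // start cameraSource here (after checking the CAMERA runtime permission)
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        cameraSource.stop(); // release the camera when the surface goes away
    }

    @Override
    public void release() { }

    @Override
    public void receiveDetections(Detector.Detections<TextBlock> detections) {
        // the recognized TextBlocks arrive here (see the full code below)
    }
}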
Here, TextRecognizer is used to perform character recognition on the camera preview, and txtRecognizer.isOperational() checks whether the device supports the Google Vision API (that is, whether the OCR dependencies have been downloaded). The output of the TextRecognizer can be retrieved using a SparseArray and a StringBuilder. 
TextBlock:
I have used TextBlock to retrieve the paragraphs (blocks of text) recognized by OCR.
Lines:
You can get the lines from a TextBlock using
textblockName.getComponents()
Element:
You can get the elements (words) from a line using
lineName.getComponents()
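Putting the three levels together, a minimal sketch of the traversal inside receiveDetections() looks like this (the complete, runnable version is part of the full code below):
@Override
public void receiveDetections(Detector.Detections<TextBlock> detections) {
    // All recognized blocks for the current frame
    SparseArray<TextBlock> items = detections.getDetectedItems();
    for (int i = 0; i < items.size(); i++) {
        TextBlock block = items.valueAt(i);              // paragraph / block of text
        Log.v("block", block.getValue());
        for (Text line : block.getComponents()) {        // lines inside the block
            Log.v("line", line.getValue());
            for (Text element : line.getComponents()) {  // words inside the line
                Log.v("element", element.getValue());
            }
        }
    }
}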
The CameraSource is started in the surfaceCreated() callback and performs the scanning process. The received detections are read through a SparseArray, similar to the way detections are read when recognizing text from a bitmap (as in the previous tutorial). The TextView at the bottom of the screen is used to preview the scanned text live.

Full Code:

You can find the full code of MainActivity.java below.
package com.androidmads.ocrcamera;

import android.Manifest;
import android.annotation.SuppressLint;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.util.SparseArray;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.TextView;

import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.text.Text;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

public class MainActivity extends AppCompatActivity implements SurfaceHolder.Callback, Detector.Processor<TextBlock> {

    private SurfaceView cameraView;
    private TextView txtView;
    private CameraSource cameraSource;

    @SuppressLint("MissingPermission")
    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        switch (requestCode) {
            case 1: {
                if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    try {
                        cameraSource.start(cameraView.getHolder());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            break;
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        cameraView = findViewById(R.id.surface_view);
        txtView = findViewById(R.id.txtview);
        TextRecognizer txtRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
        if (!txtRecognizer.isOperational()) {
            Log.e("Main Activity", "Detector dependencies are not yet available");
        } else {
            cameraSource = new CameraSource.Builder(getApplicationContext(), txtRecognizer)
                    .setFacing(CameraSource.CAMERA_FACING_BACK)
                    .setRequestedPreviewSize(1280, 1024)
                    .setRequestedFps(2.0f)
                    .setAutoFocusEnabled(true)
                    .build();
            cameraView.getHolder().addCallback(this);
            txtRecognizer.setProcessor(this);
        }
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            if (ActivityCompat.checkSelfPermission(this,
                    Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA},1);
                return;
            }
            cameraSource.start(cameraView.getHolder());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {

    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        cameraSource.stop();
    }

    @Override
    public void release() {

    }

    @Override
    public void receiveDetections(Detector.Detections<TextBlock> detections) {
        SparseArray<TextBlock> items = detections.getDetectedItems();
        final StringBuilder strBuilder = new StringBuilder();
        for (int i = 0; i < items.size(); i++) {
            // Each TextBlock is a paragraph / block of recognized text
            TextBlock textBlock = items.valueAt(i);
            strBuilder.append(textBlock.getValue());
            strBuilder.append("/");
            // The following loops show how to use lines & elements as well
            for (Text line : textBlock.getComponents()) {
                // extract scanned text lines here
                Log.v("lines", line.getValue());
                strBuilder.append(line.getValue());
                strBuilder.append("/");
                for (Text element : line.getComponents()) {
                    // extract scanned text words here
                    Log.v("element", element.getValue());
                    strBuilder.append(element.getValue());
                    strBuilder.append(" ");
                }
            }
        }
        Log.v("strBuilder.toString()", strBuilder.toString());

        txtView.post(new Runnable() {
            @Override
            public void run() {
                txtView.setText(strBuilder.toString());
            }
        });
    }
}

Download Code:

You can download the full source code for this article from GitHub. If you like this article, please star the repo and share the article.
