Summary

Industry: Drone inspection
Location: San Jose, California, USA
Partnership period: The first phase took place in 2019; the second phase of the development effort started in October 2020 and is currently ongoing
Team size: 6 experts
Software product: A web tool with AI/ML functionality
Expertise delivered: Custom Software Development, Quality Assurance, AI Solutions Development

Challenge

Our client is a California-based startup in the drone inspection industry that offers its clients an AI platform for inspection, storage, and management of industrial assets. The solution can be used by both individual inspectors and companies of any size.

Prior to this project, we had been cooperating with the client for over a decade on another project and had built a strong relationship with them. Our client became interested in a new industry and decided to create a startup that would introduce a web-based smart drone inspection solution with Artificial Intelligence and Machine Learning functionality.

One of the client’s executives became the startup’s CEO. They hired industry experts as advisors and engaged SPD Technology as a proven IT provider to implement the project. The client’s platform stores drone inspection data, including object layouts photographed by drones. After the photos are saved, the user can highlight objects in them; for example, if the photos feature solar panels, the user can mark the ones that are broken. Objects and defects can also be detected automatically by the AI/ML/Computer Vision module, and the platform can generate reports.

The project’s most significant highlight is the implementation of the AI and Machine Learning capabilities. Previously, all image processing and analysis from drone inspections were performed manually. Automating this process is bound to create billion-dollar savings for companies and become a true paradigm shift in the industry. Market research has shown that no similar solution currently exists and that demand for this kind of functionality is very high.

The scope of our services included researching and developing several versions of the solution from scratch to determine which of them best fits the project’s goals. We have been actively involved in discussing the product vision and shaping the product, and our team has also been responsible for testing, research, and the development of the Artificial Intelligence/Machine Learning functionality.

Currently, the platform is focused on solar panels and power lines, but since it works with images, it can be extended to any industry that can benefit from image analysis. In Healthcare, for example, the platform could detect certain objects in X-ray images and improve its performance as it learns from new datasets; in workplace safety, the AI could detect which workers on a construction site are not wearing helmets. The range of interested industries is likely to grow in the future, as we already have working prototypes of solutions for security cameras and waste sorting.

Solution

During the development process, the number of experts involved varied in accordance with the project’s demands. The project has undergone multiple iterations; the most recent one started in October 2020 and focuses on monitoring the condition of solar panels and power lines.

We started with a project team composed of an AI/ML expert, a back-end developer, a front-end developer, a delivery manager, and a project manager (who has also recently been acting as the product manager).

As the project evolved, we first added one more AI/ML expert and then, several months ago, a data annotation expert. The data annotation expert works on our side and marks objects using the existing solution’s UI, thus helping optimize the data used to train the AI/ML models.

We opted for the Scrum framework, which we use in most of our projects.

Our main contact on the client’s side has been the Business Owner, who has also been partially acting as a Product Manager. Additionally, we work closely with the client’s business development manager, UI/UX designer, and subject matter experts from various industries.

Technical Solution

The tech stack our team used to develop the drone AI inspection software in this project includes:

  • Infrastructure: AWS (EC2, Batch, ECR, CloudWatch, SES, Lambda), ELK stack, Grafana, Prometheus
  • Back-End: Spring, Hibernate, PostgreSQL, Docker, OpenDroneMap
  • Front-End: Angular 11, TypeScript, Three.js, Konva.js, RxJS, Jasmine
  • ML tools: PyTorch, MMDetection, OpenCV (see the detection sketch after this list)
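
To give a sense of how the ML tools above fit together, here is a minimal sketch of running defect detection on a single inspection photo. It assumes an MMDetection 2.x-style API and a plain bounding-box detector; the config path, checkpoint, and confidence threshold are illustrative placeholders rather than the project’s actual model artifacts.

```python
# Hedged sketch: detecting defects on a drone photo with OpenCV + MMDetection.
# Paths and the score threshold are placeholders for illustration only.
import cv2
from mmdet.apis import init_detector, inference_detector

CONFIG = "configs/solar_panel_defects.py"      # hypothetical model config
CHECKPOINT = "checkpoints/solar_defects.pth"   # hypothetical trained weights

model = init_detector(CONFIG, CHECKPOINT, device="cuda:0")

image = cv2.imread("inspection_photo.jpg")
result = inference_detector(model, image)      # one array of [x1, y1, x2, y2, score] per class

# Keep only confident detections for reporting.
confident = [
    (class_id, box)
    for class_id, boxes in enumerate(result)
    for box in boxes
    if box[4] >= 0.5
]
print(f"{len(confident)} probable defects detected")
```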

All decisions made during the development process, including the tech stack choices, were aimed at increasing the development speed or securing other long-term benefits. In the later stages of development, we began to use AWS Lambda more extensively, as this technology scales easily, has a free usage tier, and thus helps us reduce costs for the client.
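
As a purely illustrative example of how a serverless step can fit into such a pipeline (the resource names and the S3-to-SQS flow below are assumptions made for the sketch, not a description of the client’s actual infrastructure), a Python Lambda handler that forwards newly uploaded inspection images to a processing queue might look like this:

```python
# Hedged sketch: a Lambda handler reacting to S3 "ObjectCreated" events and
# queuing each uploaded image for processing. Resource names are placeholders.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-west-1.amazonaws.com/123456789012/image-processing"  # placeholder

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}
```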

The biggest challenge for us and for our client was the fact that there was no similar solution on the market. We were breaking new ground and had to conduct a great deal of research and numerous experiments to achieve our goals and build the required solution from scratch.

One of the insights we gained during the process is that the most complex functionality should be developed first: in AI/ML projects, one should prioritize R&D and only then design the web tool. Over the course of this project and the research into its AI/ML components, our vision of the web tool changed multiple times, and we had to modify the solution accordingly.

For example, we already had a working prototype from an early iteration but had to rebuild everything in accordance with the wishes of our client. Our experts completely redesigned the part of the data structure responsible for storing annotation data to make it compatible with the COCO format. We also rebuilt the front-end part of the application and added hotkeys to improve the user experience. As a result, within the first few months, we developed an entirely new part of the system in which the interactions between the back-end, front-end, and AI followed the COCO standard along with project-specific extensions.
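
For reference, COCO-style annotation data links images, categories, and individual object annotations through numeric IDs. A minimal example of the format, with made-up values rather than the client’s real data, looks like this (shown as a Python dict; on disk it is plain JSON):

```python
# Hedged sketch: a minimal COCO-style annotation payload with made-up IDs.
coco_sample = {
    "images": [
        {"id": 1, "file_name": "solar_field_001.jpg", "width": 4000, "height": 3000},
    ],
    "categories": [
        {"id": 1, "name": "solar_panel", "supercategory": "asset"},
        {"id": 2, "name": "broken_panel", "supercategory": "defect"},
    ],
    "annotations": [
        {
            "id": 10,
            "image_id": 1,
            "category_id": 2,
            "bbox": [512.0, 640.0, 128.0, 96.0],  # [x, y, width, height]
            "area": 12288.0,
            "iscrowd": 0,
        },
    ],
}
```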

From a technical standpoint, our project team encountered several significant challenges: adapting the ML pipeline to datasets in different formats, including complex hierarchical datasets, and creating a 3D reconstruction from multiple images. It also took additional effort to develop an algorithm for grouping images that contain the same object, for example, a building that appears in multiple photos.

In developing the solution, our experts had to work with a hierarchical dataset, which was dictated by the industry’s demands and created additional difficulties. Drone inspection of power lines, one of the target activities, requires increased accuracy so that the AI can detect a vast number of small objects.

In a hierarchical dataset, each annotation can have parents and children. For instance, precise object detection in the power line industry requires that the AI system be able to distinguish both the pole and the cross arm from the background, and that it be able to identify the same object in different images.

To achieve this, and to determine whether a pole carries a cross arm and whether the two are distinct objects within the same hierarchy, our ML module must produce and consume data with the existing hierarchy taken into account.
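
The stock COCO schema has no notion of hierarchy, so one straightforward way to express parent/child relations (shown here purely to illustrate the idea, not as the team’s actual schema) is to attach a parent reference to each annotation:

```python
# Hedged sketch: COCO-style annotations extended with a "parent_id" field so
# that a cross arm can point to the pole it belongs to. Field names and IDs
# are illustrative, not the project's real data model.
annotations = [
    {"id": 1, "image_id": 7, "category_id": 1,              # category 1 = pole
     "bbox": [1020.0, 310.0, 180.0, 1450.0], "parent_id": None},
    {"id": 2, "image_id": 7, "category_id": 2,               # category 2 = cross arm
     "bbox": [960.0, 380.0, 300.0, 90.0], "parent_id": 1},   # child of pole 1
]

def children_of(annotation_id):
    """Return the direct children of an annotation in the hierarchy."""
    return [a for a in annotations if a["parent_id"] == annotation_id]

print(children_of(1))  # -> the cross arm attached to pole 1
```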

Unfortunately, none of the existing frameworks or models could consume and produce annotation data in a hierarchical way. Because of this, our project team modified an existing framework for this purpose.

In the client’s solution, users provide photos captured by drones, in which the object of interest is shown from different perspectives. To better respond to user needs, the client asked our team to enable users to obtain an orthophoto and a 3D model of their facilities.

With a 3D reconstruction in place, it is quite easy to obtain an orthophoto, so we started with this feature. After some research, we discovered an open-source project with the required functionality and successfully integrated it into our product.
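
OpenDroneMap, which appears in the back-end stack above, handles exactly this kind of photogrammetry. Purely as an illustration (assuming its pyodm Python client and a locally running NodeODM instance, which may differ from the team’s actual integration), requesting an orthophoto and a 3D model for a batch of drone photos can look like this:

```python
# Hedged sketch: asking a NodeODM service to reconstruct a site from drone
# photos via the pyodm client. Host, port, paths, and options are placeholders.
import glob
from pyodm import Node

node = Node("localhost", 3000)                      # a running NodeODM instance
images = glob.glob("uploads/site_42/*.JPG")         # placeholder upload directory

task = node.create_task(images, {"orthophoto-resolution": 2.0})
task.wait_for_completion()
task.download_assets("results/site_42")             # orthophoto + textured 3D model
```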

Lastly, we had to figure out how to deal with the image grouping challenge.

When you capture an object from several perspectives, each image comes with location metadata (longitude, latitude, and altitude) as well as the camera’s pitch and direction. We decided to combine this image and camera metadata with geometry formulas and use the combination to precisely identify the objects the camera had captured.
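
A greatly simplified version of that geometry, given here only to illustrate the idea (it assumes flat terrain and a camera pointed below the horizon, whereas the production algorithm is more involved), projects the camera’s line of sight onto the ground to estimate the coordinates of the photographed object:

```python
# Hedged sketch: estimating the ground point a drone camera is looking at from
# image metadata (GPS position, altitude above ground, camera pitch and heading).
# Assumes flat terrain and a small survey area, so a local approximation suffices.
import math

EARTH_RADIUS_M = 6_371_000.0

def ground_target(lat, lon, altitude_m, pitch_deg, heading_deg):
    """Project the camera's line of sight onto flat ground.

    pitch_deg   -- angle below the horizon (90 = pointing straight down)
    heading_deg -- compass direction the camera faces (0 = north)
    """
    # Horizontal distance from the drone to the point the camera looks at.
    if pitch_deg >= 90.0:
        distance_m = 0.0
    else:
        distance_m = altitude_m / math.tan(math.radians(pitch_deg))

    # Convert that distance and heading into latitude/longitude offsets.
    d_lat_rad = (distance_m * math.cos(math.radians(heading_deg))) / EARTH_RADIUS_M
    d_lon_rad = (distance_m * math.sin(math.radians(heading_deg))) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat))
    )
    return lat + math.degrees(d_lat_rad), lon + math.degrees(d_lon_rad)

# Example: drone at 60 m above ground, camera tilted 45 degrees down, facing east.
print(ground_target(37.33, -121.89, 60.0, 45.0, 90.0))
```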

After that, we investigated clustering algorithms to improve the accuracy of this approach. Since the client was expected to provide the data in a format where all the images of the same object sit in the same directory, our team decided to derive the mapping between objects and their images from the directory structure and achieved 100% accuracy here.
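
The directory-based grouping itself is simple. Here is a small sketch of the idea, assuming the agreed-upon layout in which every folder holds the photos of exactly one object; directory and file names are placeholders:

```python
# Hedged sketch: grouping inspection images by object, relying on the convention
# that each directory contains photos of exactly one object.
from collections import defaultdict
from pathlib import Path

def group_images_by_object(root):
    """Map each object (directory name) to the list of its image files."""
    groups = defaultdict(list)
    for image_path in Path(root).rglob("*.jpg"):
        groups[image_path.parent.name].append(image_path)
    return dict(groups)

# e.g. uploads/site_42/pole_017/DJI_0001.jpg is grouped under "pole_017"
print(group_images_by_object("uploads/site_42"))
```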

It is also important to mention that, in the latest iteration of the project, our team rebuilt the solution’s back-end from the ground up and migrated from MongoDB to PostgreSQL. The decision to migrate was made because the product’s data model maps naturally onto a relational structure and most of our developers prefer working with relational databases. After rebuilding the CI/CD and deployment model, we managed to cut the client’s infrastructure costs by a factor of 2-3.

At the moment, we are working on a new iteration of the AI pipeline, expanding the platform’s functionality with new types of models. We are also adding entirely new capabilities, such as real-time defect detection on security camera footage. Finally, we are looking to integrate the solution with multiple IoT devices in the future and customize the platform for the varying needs of the client’s customers.

Result

SPD Technology’s project team successfully delivered the product’s MVP with some additional features and then went on to develop the full product. The platform is a best-of-breed solution in its niche that also provides completely innovative functionality, such as automatic image processing.

The client’s smart drone inspection platform is now fully functional, has excellent market prospects, and is drawing interest from businesses across different industries.

Ready to speed up your Software Development?

Explore the solutions we offer to see how we can assist you!

Schedule a Call