Artificial intelligence brought the promise of automating one of the most time-consuming and manual processes in digital asset management – asset tagging. With artificial intelligence, thousands of existing and new assets can be processed in a matter of hours, rather than weeks.
But while AI can suggest relevant metadata, human verification is still required to evaluate its accuracy and relevance to the business. As we implement AI into a DAM, we need to be careful to do so in a way that makes asset metadata more meaningful, not less, so that DAM administrators can make assets more easily discoverable.
Here are 6 best practices to help you achieve this and ensure your team successfully implements artificial intelligence in your DAM.
1. AI-Generated Metadata Should Be Kept Separate
When implementing AI in a digital asset management system, it’s important to keep your AI-generated metadata separate from your human-generated metadata.
When kept separate, the organization can toggle the availability of AI-generated metadata on or off and enable users to decide whether or not to use it in any given search. The primary purpose of this is to ensure that metadata derived from the AI service doesn’t corrupt the quality of existing metadata.
This is especially helpful for organizations piloting a machine learning service, like Microsoft Cognitive Services or Amazon Rekognition. DAM administrators can assess the quality of the metadata the service generates before allowing users to include it in searches.
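The separation described above can be sketched as a simple data model in which AI-generated tags live in their own field, and a search toggle decides whether they are used. All names here are illustrative, not a real DAM's API:

```python
# Minimal sketch: keep AI-generated tags apart from human-curated keywords,
# and let each search decide whether to include them.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    keywords: list = field(default_factory=list)  # human-curated metadata
    ai_tags: list = field(default_factory=list)   # AI-generated, kept separate

def searchable_terms(asset, include_ai_tags=False):
    """Return the terms the search index should use for this asset."""
    terms = list(asset.keywords)
    if include_ai_tags:
        terms += [tag["label"] for tag in asset.ai_tags]
    return terms

photo = Asset("IMG-001",
              keywords=["product shot"],
              ai_tags=[{"label": "phone", "provider": "Amazon Rekognition"}])
```

Because the AI tags are stored separately, toggling them off (`include_ai_tags=False`) leaves the human-curated metadata untouched.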
2. AI Providers Should Be Tracked as Users
One of the primary benefits of a DAM is its ability to track actions taken by specific users, and AI should be no exception. Tracking your AI just like any other user allows actions performed to be more easily tracked and audited. This becomes even more important if you plan on using multiple AI services.
For example, if you’re using Microsoft Cognitive Services for some metadata and Google Vision for others, by creating a user account for each you can better audit the services, as you’re able to isolate the metadata that was added by each service.
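A per-provider service account makes that audit trail trivial to query. The sketch below assumes a simple in-memory log; account names and record fields are hypothetical:

```python
# Sketch: record each tagging action under the AI provider's own user
# account, so metadata can be audited per service.
audit_log = []

def record_tag(asset_id, tag, user):
    """Log a tagging action attributed to a specific (service) user."""
    audit_log.append({"asset": asset_id, "tag": tag, "user": user})

record_tag("IMG-001", "phone", user="svc-microsoft-cognitive")
record_tag("IMG-001", "smartphone", user="svc-google-vision")
record_tag("IMG-002", "laptop", user="svc-google-vision")

def tags_added_by(service):
    """Isolate the metadata contributed by one AI service."""
    return [entry for entry in audit_log if entry["user"] == service]
```

With this in place, comparing the output of two providers is a matter of filtering the log by service account.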
3. AI Providers Should Be Tracked by Feature
When implementing an AI service, it’s important to separate services by project or feature so that training and test data can be associated with a particular metadata attribute.
For example, if you create a test set for your AI service to identify phones, you would define a Cognitive Metadata Attribute called “Phones” to map to that corresponding AI project. You could also create a more general Cognitive Metadata Attribute called “Keywords” to associate with untrained auto-tagging features provided by the AI service.
Separating your services in this way allows you to transfer the data set to another service provider if you’re not satisfied with the results of a feature.
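One way to picture this is a mapping from each Cognitive Metadata Attribute to the AI project that populates it; swapping providers for a single feature then touches only one entry. The mapping structure and project names below are illustrative:

```python
# Sketch: map each Cognitive Metadata Attribute to the AI project that
# populates it, so one feature can be moved to another provider in isolation.
attribute_projects = {
    "Phones": {"provider": "Microsoft Cognitive Services",
               "project": "phone-detector-v1"},      # trained on a phone test set
    "Keywords": {"provider": "Microsoft Cognitive Services",
                 "project": "general-autotag"},      # untrained auto-tagging
}

def reassign_attribute(attribute, new_provider, new_project):
    """Point one attribute at a different provider without touching the rest."""
    attribute_projects[attribute] = {"provider": new_provider,
                                     "project": new_project}

# Dissatisfied with the "Phones" feature? Move just that data set.
reassign_attribute("Phones", "Google Vision", "phone-detector-gv")
```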
4. AI-Generated Metadata Should Be Filterable
When using AI within a DAM, users need to be able to search for an asset based on AI-specific filters, such as AI provider, API/model version and prediction date. Depending on company policy, users should also be able to increase or decrease the acceptable confidence level for any given search.
For example, a user could search using the keyword “Dog” and then filter it to only show results that were tagged using IBM Watson and have a confidence level of 95% or higher.
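That "Dog" search could be expressed as a filter over AI-specific fields like the ones below. The result records are hypothetical, but the filtering logic is the point:

```python
# Sketch: filter search hits on AI-specific fields (provider, confidence).
results = [
    {"asset": "IMG-001", "tag": "dog", "provider": "IBM Watson", "confidence": 0.97},
    {"asset": "IMG-002", "tag": "dog", "provider": "IBM Watson", "confidence": 0.80},
    {"asset": "IMG-003", "tag": "dog", "provider": "Google Vision", "confidence": 0.99},
]

def filter_hits(hits, provider=None, min_confidence=0.0):
    """Keep only hits from the given provider at or above the confidence floor."""
    return [h for h in hits
            if (provider is None or h["provider"] == provider)
            and h["confidence"] >= min_confidence]

watson_dogs = filter_hits(results, provider="IBM Watson", min_confidence=0.95)
```

The same pattern extends to other AI-specific filters, such as API/model version or prediction date.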
5. AI-Generated Metadata Should Be Convertible
Most digital asset management systems allow embedded metadata to be converted to a regular asset attribute (such as a keyword), and auto-tags should be no different. When implementing AI into your DAM, consider setting rules so that if a tag is generated with a high confidence level, it is automatically converted into a general keyword (so it will still be available, even if AI services are toggled “off”).
For example, you can set up your AI with a set of rules so that any auto-tag with a confidence level of 99.9% or higher can be converted into a keyword.
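That rule could be a small promotion function like the sketch below, using the 99.9% threshold from the example above. The record layout is illustrative:

```python
# Sketch: promote high-confidence auto-tags to permanent keywords, so they
# survive even when AI-generated metadata is toggled off.
def promote_auto_tags(asset, threshold=0.999):
    """Copy any auto-tag at or above the threshold into the keyword list."""
    for tag in asset["ai_tags"]:
        if tag["confidence"] >= threshold and tag["label"] not in asset["keywords"]:
            asset["keywords"].append(tag["label"])
    return asset

asset = {
    "keywords": [],
    "ai_tags": [
        {"label": "phone", "confidence": 0.9995},  # above the 99.9% rule
        {"label": "table", "confidence": 0.84},    # stays an auto-tag only
    ],
}
promote_auto_tags(asset)
```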
6. AI Should Improve Over Time
One of the benefits of AI is its ability to progressively improve its performance on a specific task. As you get more data, you’re able to retrain the model and get better precision.
With this in place, any deleted or incorrect auto-tag can be treated as negative feedback, and any auto-tag confirmed as a keyword can be treated as positive feedback for the model. You can retrain the model manually, or retraining can be triggered automatically after the DAM has collected a certain volume of asset data points.
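The feedback loop above can be sketched as a small collector that flags when enough signals have accumulated to trigger retraining. The threshold and record structure are illustrative:

```python
# Sketch: treat tag deletions as negative feedback and keyword conversions
# as positive feedback, and flag when enough feedback exists to retrain.
feedback = []
RETRAIN_THRESHOLD = 3  # illustrative; a real DAM would use a far larger volume

def record_feedback(asset_id, label, positive):
    """Log one feedback signal; return True once retraining should trigger."""
    feedback.append({"asset": asset_id, "label": label, "positive": positive})
    return len(feedback) >= RETRAIN_THRESHOLD

record_feedback("IMG-001", "phone", positive=True)   # auto-tag confirmed as keyword
record_feedback("IMG-002", "cat", positive=False)    # auto-tag deleted by a user
should_retrain = record_feedback("IMG-003", "phone", positive=True)
```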
This capability, however, is only applicable when you’re using Guided or Specialized MLaaS, where you own the model and supply it with the “right information”.
Getting Started with AI and DAM
These are a few best practices for implementing artificial intelligence in your digital asset management system, but there are other important factors to consider. Download a Quick Guide to AI in DAM to learn the levels of customization available, important factors to consider and key steps to implementing AI in a DAM.
Ready to get started with artificial intelligence? Let’s talk about your unique use-case!