
Amazon Bedrock Guardrails image content filters provide industry-leading safeguards, helping customers block up to 88% of harmful multimodal content: generally available today

Amazon Bedrock Guardrails announces the general availability of image content filters, enabling you to moderate both image and text content in your generative AI applications. Previously limited to text only, this enhancement now provides comprehensive content moderation across both modalities. This new capability removes the heavy lifting required to build your own image safeguards, or the cycles of manual content moderation that can be error-prone and tedious.

Tero Hottinen, VP, Head of Strategic Partnerships at KONE, envisions the following use case:

“In its ongoing evaluation, KONE recognizes the potential of Amazon Bedrock Guardrails as a key component in protecting generative AI applications, with the guardrails playing a crucial role in enabling more accurate diagnosis and analysis of multimodal content.”

Amazon Bedrock Guardrails provides configurable safeguards to help customers block harmful or undesirable inputs and outputs in their generative AI applications. Customers can create custom guardrails tailored to their specific use cases by using various policies to detect and filter out harmful or unwanted content. In addition, customers can use guardrails to detect model hallucinations and help make responses grounded and accurate. Through the standalone ApplyGuardrail API, Guardrails empowers customers to apply consistent policies across any foundation model, including models hosted on Amazon Bedrock, fine-tuned models, and third-party models. Bedrock Guardrails also supports seamless integration with Amazon Bedrock Agents and Amazon Bedrock Knowledge Bases, enabling safeguards to be enforced across entire workflows, such as in agentic applications.

Amazon Bedrock Guardrails offers six distinct policies: content filters to detect and filter harmful material across several categories, including hate, insults, sexual content, violence, misconduct, and prompt attacks; denied topics to restrict specific subjects; sensitive information filters to block personally identifiable information (PII); word filters to block specific words; contextual grounding checks to detect hallucinations and analyze response relevance; and Automated Reasoning checks (currently in preview) to identify, correct, and explain factual claims. With the new image content filter capability, these safeguards now extend to both text and images, helping customers block up to 88% of harmful multimodal content, whether it appears in image or text form.

This new capability is generally available in the US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) AWS Regions.

In this post, we discuss how to get started with image content filters in Amazon Bedrock Guardrails.

Solution overview

To get started, create a guardrail on the AWS Management Console and configure the content filters for either text or image data, or both. You can also use the AWS SDKs to integrate this capability into your applications.

Create a guardrail

To create a guardrail, complete the following steps:

  1. On the Amazon Bedrock console, under Safeguards in the navigation pane, choose Guardrails.
  2. Choose Create guardrail.
  3. In the Configure content filters section, under Harmful categories and Prompt attacks, you can use the existing content filters to detect and block image data in addition to text data.
  4. After you have selected and configured the content filters you want to use, save the guardrail and start using it to block harmful inputs or outputs in your generative AI applications.
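The same configuration can also be expressed with the AWS SDK. The following is a minimal sketch using the boto3 `create_guardrail` API; the guardrail name, filter categories, and strengths shown here are illustrative assumptions, not values from this post:

```python
# Minimal sketch (assumed names and strengths): configure content filters
# that apply to both TEXT and IMAGE modalities, then create the guardrail.

def build_guardrail_request(name="multimodal-demo-guardrail"):
    """Build the create_guardrail request payload (pure data, no AWS call)."""
    filters = [
        {
            "type": category,
            "inputStrength": "HIGH",
            "outputStrength": "HIGH",
            # Apply this filter to text and image content, on input and output
            "inputModalities": ["TEXT", "IMAGE"],
            "outputModalities": ["TEXT", "IMAGE"],
        }
        for category in ("VIOLENCE", "HATE", "SEXUAL", "INSULTS", "MISCONDUCT")
    ]
    return {
        "name": name,
        "description": "Blocks harmful text and image content",
        "contentPolicyConfig": {"filtersConfig": filters},
        "blockedInputMessaging": "Sorry, I can't process this input.",
        "blockedOutputsMessaging": "Sorry, I can't return this output.",
    }

def create_multimodal_guardrail(bedrock_client):
    """Create the guardrail; expects a boto3 'bedrock' control-plane client."""
    response = bedrock_client.create_guardrail(**build_guardrail_request())
    return response["guardrailId"], response["version"]
```

In practice you would pass a client created with `boto3.client("bedrock")` in a Region where the feature is available.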

Test the guardrail with text generation

To test the new guardrail on the Amazon Bedrock console, select the guardrail and choose Test. You have two options: test the guardrail by choosing and invoking a model, or test the guardrail without invoking a model by using the Amazon Bedrock Guardrails standalone ApplyGuardrail API.

With the ApplyGuardrail API, you can validate content at any point in your application flow before processing it or serving results to the user. You can also use the API to evaluate inputs and outputs for self-managed (custom) or third-party FMs, regardless of the underlying infrastructure. For example, you could use the API to evaluate a Meta Llama 3.2 model hosted on Amazon SageMaker or a Mistral NeMo model running on your laptop.
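As a sketch of what such a call can look like with boto3 (the guardrail ID, version, and helper names here are placeholders for illustration, not values from this post):

```python
# Minimal sketch: validate a text prompt plus an image with the standalone
# ApplyGuardrail API, without invoking any model.

def build_content_blocks(prompt_text, image_bytes, image_format="jpeg"):
    """Build the mixed text + image content list for ApplyGuardrail."""
    return [
        {"text": {"text": prompt_text}},
        {"image": {"format": image_format, "source": {"bytes": image_bytes}}},
    ]

def guardrail_blocks_input(runtime_client, guardrail_id, guardrail_version,
                           prompt_text, image_bytes):
    """Return True if the guardrail intervened on the given input."""
    response = runtime_client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source="INPUT",  # use "OUTPUT" to validate a model response instead
        content=build_content_blocks(prompt_text, image_bytes),
    )
    return response["action"] == "GUARDRAIL_INTERVENED"
```

You would pass a client created with `boto3.client("bedrock-runtime")` along with your own guardrail ID and version.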

Test the guardrail by choosing and invoking a model

Choose a model that supports image inputs or outputs, for example, Anthropic's Claude 3.5 Sonnet. Verify that the prompt and response filters are enabled for image content. Next, provide a prompt, upload an image file, and choose Run.

In this example, Amazon Bedrock Guardrails intervened. Choose View trace for more details.

The guardrail trace provides a record of how safety measures were applied during an interaction. It shows whether Amazon Bedrock Guardrails intervened and what assessments were made on both the input (prompt) and the output (model response). In this example, the content filters blocked the input prompt because they detected violence in the image with medium confidence.
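Programmatically, the same model-invocation test can be sketched with the Converse API, which accepts a guardrail configuration and returns the trace when enabled. The model ID, guardrail ID, and helper names below are assumptions for illustration:

```python
# Minimal sketch: invoke a model through a guardrail with the Converse API
# and return the guardrail trace. IDs are illustrative placeholders.

def build_converse_kwargs(model_id, guardrail_id, guardrail_version,
                          prompt_text, image_bytes):
    """Assemble Converse API arguments with the guardrail trace enabled."""
    return {
        "modelId": model_id,
        "messages": [{
            "role": "user",
            "content": [
                {"text": prompt_text},
                {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
            ],
        }],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
            "trace": "enabled",  # include the guardrail assessment in the response
        },
    }

def converse_with_trace(runtime_client, **kwargs):
    """Call Converse; returns the stop reason and the guardrail trace."""
    response = runtime_client.converse(**build_converse_kwargs(**kwargs))
    # stopReason is "guardrail_intervened" when the guardrail blocks content
    return response["stopReason"], response.get("trace", {}).get("guardrail")
```

As before, `runtime_client` would be a `boto3.client("bedrock-runtime")` instance.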

Test the guardrail without invoking a model

On the Amazon Bedrock console, choose Use ApplyGuardrail API to test the guardrail without invoking a model. Choose whether you want to validate an input prompt or an example of model-generated output. Then, repeat the steps from the previous section: verify that the prompt and response filters are enabled for image content, provide the content to validate, and choose Run.

Again, we used the same image and input prompt, and Amazon Bedrock Guardrails intervened. Choose View trace for more details.

Test the guardrail with image generation

Now, let's test Amazon Bedrock Guardrails multimodal toxicity detection with an image generation use case. We generate an image using a Stability model on Amazon Bedrock with the InvokeModel API and the guardrail:

import base64
import json
import os
import random
import string

import boto3
import botocore

region = "us-west-2"  # set to a Region where the model and guardrail are available
guardrailIdentifier = "<>"  # your guardrail ID
guardrailVersion = "1"

model_id = 'stability.sd3-5-large-v1:0'
output_images_folder = "images/output"

body = json.dumps(
    {
        "prompt": "A Gun",  # for image generation ("A gun" should get blocked by violence)
        "output_format": "jpeg"
    }
)

bedrock_runtime = boto3.client("bedrock-runtime", region_name=region)
try:
    print("Making a call to InvokeModel API for model: {}".format(model_id))
    response = bedrock_runtime.invoke_model(
        body=body,
        modelId=model_id,
        trace="ENABLED",
        guardrailIdentifier=guardrailIdentifier,
        guardrailVersion=guardrailVersion
    )
    response_body = json.loads(response.get('body').read())
    print("Received response from InvokeModel API (Request Id: {})".format(response['ResponseMetadata']['RequestId']))
    if 'images' in response_body and len(response_body['images']) > 0:
        os.makedirs(output_images_folder, exist_ok=True)
        images = response_body["images"]
        for image in images:
            image_id = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
            image_file = os.path.join(output_images_folder, "generated-image-{}.jpg".format(image_id))
            print("Saving generated image {} at {}".format(image_id, image_file))
            with open(image_file, 'wb') as image_file_descriptor:
                image_file_descriptor.write(base64.b64decode(image.encode('utf-8')))
    else:
        print("No images generated from model")
    guardrail_trace = response_body['amazon-bedrock-trace']['guardrail']
    guardrail_trace['modelOutput'] = ['']
    print(guardrail_trace['outputs'])
    print("\nGuardrail Trace: {}".format(json.dumps(guardrail_trace, indent=2)))
except botocore.exceptions.ClientError as err:
    print("Failed while calling InvokeModel API with RequestId = {}".format(err.response['ResponseMetadata']['RequestId']))
    raise err

You can find the complete example in the GitHub repo.

Conclusion

In this post, we explored how the new image content filters in Amazon Bedrock Guardrails provide comprehensive safeguards for multimodal content. By expanding beyond text-only filtering, this solution now helps customers block up to 88% of harmful or unwanted multimodal content. Guardrails can help organizations across healthcare, manufacturing, financial services, media, and education improve application safety without the heavy lifting of building custom safeguards or enduring error-prone manual moderation.

To learn more, see Stop harmful content in models using Amazon Bedrock Guardrails.


About the authors

Satveer Khurpa is a Sr. WW Specialist Solutions Architect, Amazon Bedrock, at Amazon Web Services, specializing in Amazon Bedrock security. In this role, he uses his expertise in cloud architectures to develop innovative generative AI solutions for customers across diverse industries. His deep understanding of generative AI technologies and security principles allows him to design scalable, secure, and responsible applications that unlock new business opportunities while maintaining robust security postures.

Shyam Srinivasan is on the Amazon Bedrock Guardrails product team. He cares about making the world a better place through technology and loves being part of that journey. In his spare time, Shyam likes to run long distances, travel around the world, and experience new cultures with family and friends.

Antonio Rodriguez is a generative AI specialist solutions architect at AWS. He helps companies of all sizes solve their challenges, embrace innovation, and create new business opportunities with Amazon Bedrock. Apart from work, he loves to spend time with his family and play sports with his friends.

Dr. Andrew Kane is an AWS Principal WW Tech Lead (AI Language Services) based out of London. He focuses on the AWS Language and Vision AI services, helping customers architect multiple AI services into a single use case driven solution. Before joining AWS at the beginning of 2015, Andrew spent two decades working in the fields of signal processing, financial payments systems, and editorial and publishing systems. He is a keen karate enthusiast (just one belt away from Black Belt) and an avid home-brewer, using automated brewing hardware and other IoT sensors.
