Mitigating Hallucinated Translations in Large Language Models with Hallucination-Focused Preference Optimization

Machine translation (MT) is undergoing a paradigm shift, with systems based on fine-tuned large language models (LLMs) becoming increasingly competitive with traditional encoder-decoder models trained specifically for translation. However, LLM-based systems are at a higher risk of producing hallucinations, which can undermine user trust and safety. Most prior research on hallucination mitigation has focused on traditional MT models, with solutions that involve post-hoc mitigation: detecting hallucinated translations and re-translating them. While effective, this approach introduces additional complexity in deploying extra tools in production and also increases latency. To address these limitations, we propose a method that intrinsically learns to mitigate hallucinations during the model training phase. Specifically, we introduce a data-creation framework to generate hallucination-focused preference datasets. Fine-tuning LLMs on these datasets reduces the hallucination rate by an average of 96% across five languages, while preserving overall translation quality. In a zero-shot setting, our approach reduces hallucinations by 89% on average across three unseen languages.
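The abstract does not spell out the preference-optimization objective or the exact structure of the preference data. As a minimal sketch, assuming a DPO-style loss over pairs of a faithful translation ("chosen") and a synthetically generated hallucination ("rejected"), the core of such training might look as follows; the record fields and the dpo_loss helper below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical preference record: the prompt pairs a source sentence with a
# translation instruction; "chosen" is a faithful reference translation and
# "rejected" is a hallucinated output detached from the source.
example = {
    "prompt": "Translate to German: The cat sat on the mat.",
    "chosen": "Die Katze saß auf der Matte.",
    "rejected": "Der Hund lief durch den Park.",  # hallucination
}

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO objective over a batch of preference pairs.

    Each tensor holds per-example sequence log-probabilities under either
    the policy being fine-tuned or the frozen reference model.
    """
    # Log-ratios measure how much each model prefers a completion
    # relative to the reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Push the policy to widen the margin between the faithful
    # translation and the hallucinated one.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

Because the preference signal is baked into the weights at training time, inference needs no extra detection or re-translation step, which is what avoids the production complexity and latency of post-hoc pipelines.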