Dify is an easy-to-use LLMOps platform designed to empower more people to create sustainable, AI-native applications. With visual orchestration for various application types, Dify offers out-of-the-box, ready-to-use applications that can also serve as Backend-as-a-Service APIs. Unify your development process with one API for plugin and dataset integration, and streamline your operations with a single interface for prompt engineering, visual analytics, and continuous improvement.
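For instance, an application published with Dify can be called from your own backend over plain HTTP. A minimal sketch of such a call, assuming a chat-style application (the host, endpoint path, and request fields shown here are illustrative; check the API page of your own application for the exact values):

curl -X POST 'https://api.dify.ai/v1/chat-messages' \
  -H 'Authorization: Bearer {your-app-api-key}' \
  -H 'Content-Type: application/json' \
  -d '{"inputs": {}, "query": "Hello, what can you do?", "response_mode": "blocking", "user": "demo-user"}'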
Applications created with Dify include:
Out-of-the-box websites supporting form mode and chat conversation mode
A single API encompassing plugin capabilities, context enhancement, and more, saving you backend coding effort
Visual data analysis, log review, and annotation for applications

Dify is compatible with LangChain, meaning we'll gradually support multiple LLMs. Currently supported:
Before installing Dify, make sure your machine meets the following minimum system requirements:
The easiest way to start the Dify server is to run our docker-compose.yml file. Before running the installation command, make sure that Docker and Docker Compose are installed on your machine:
cd docker
docker-compose up -d
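These commands assume you are already inside a checkout of the Dify source tree. A minimal sketch of the full sequence, assuming the repository is cloned from its public GitHub project (langgenius/dify; verify the URL against the official docs):

# Clone the source, enter the bundled docker directory, and start the services in the background
git clone https://github.com/langgenius/dify.git
cd dify/docker
docker-compose up -d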
After it starts, you can open the Dify dashboard in your browser at http://localhost/install and go through the initialization process.
If you need to customize the configuration, refer to the comments in our docker-compose.yml file and set the environment configuration manually. After making the changes, run 'docker-compose up -d' again.
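A minimal sketch of that cycle, using standard docker-compose commands to recreate the services and confirm they came back up cleanly:

# Recreate the containers with the edited configuration, then check status and logs
docker-compose up -d
docker-compose ps
docker-compose logs -f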
Features under development:
Q: What can I do with Dify?
A: Dify is a simple yet powerful LLM development and operations tool. You can use it to build commercial-grade applications and personal assistants. If you want to develop your own applications, Dify can save you the backend work of integrating with OpenAI and offers visual operations capabilities, allowing you to continuously improve and train your GPT model.
Q: How do I use Dify to "train" my own model?
A: A valuable application consists of Prompt Engineering, context enhancement, and Fine-tuning. We've created a hybrid programming approach that combines Prompts with programming languages (similar to a template engine), making it easy to accomplish tasks such as long-text embedding or capturing the subtitles of a user-supplied YouTube video - all of which are submitted as context for LLMs to process. We place great emphasis on application operability: data generated by users while using the App is available for analysis, annotation, and continuous training. Without the right tools, these steps can be time-consuming.
Q: What do I need to prepare if I want to create my own application?
A: We assume you already have an OpenAI API Key; if not, please register for one. If you already have some content that can serve as training context, that's great!
Q: What interface languages are available?
A: English and Chinese are currently supported, and you can contribute language packs to us.