Thoughts on Building an Enterprise Application Security System

The object protected by application security is the application system, which takes many forms: web sites, mobile apps, mini-programs, desktop clients, and other systems. Application security is the body of security work built around these applications. Typically, and especially for consumer-facing (C-end) applications, a company's business is tightly bound to its applications: the applications directly carry the business, so their security receives growing attention from enterprises and is often ranked first among security priorities. As the Internet industry's reach into everyday life has grown, many basic services at the national level now have Internet-facing applications as well. The damage done by black- and gray-market cybercrime has pushed the state to legislate against Internet crime, and that legislation in turn pushes enterprises to invest in application security. For example, China's Cybersecurity Law establishes the Multi-Level Protection Scheme (等级保护), under which all domestic systems must undergo classified protection evaluation.

    02

    Import Kafka data into OSS using E-MapReduce service

Overview

Kafka is a widely used message queue in the open-source community. Although Confluent officially provides a connector plug-in for importing data from Kafka directly into HDFS, there is no official support for Alibaba Cloud's object storage service, OSS. This article gives a simple example of writing data from Kafka to Alibaba Cloud OSS. Because the Alibaba Cloud E-MapReduce service integrates a large number of open-source components and tools for connecting to Alibaba Cloud, the example here runs directly in an E-MapReduce cluster. It uses the open-source Flume tool as an intermediary between Kafka and OSS; the Flume component may also appear on the E-MapReduce platform in the future.

Scenario example

Next we walk through a simple example. If you already have a Kafka cluster online, you can jump directly to step 4.

1. In the Kafka home directory, start the Kafka service process. In the configuration file, set the Zookeeper address to the service address emr-header-1:2181:

    bin/kafka-server-start.sh config/server.properties

2. Create a Kafka topic named test:

    bin/kafka-topics.sh --create --zookeeper emr-header-1:2181 \
        --replication-factor 1 --partitions 1 --topic test

3. Write data to the test topic; here the data is the performance monitoring output of the local machine:

    vmstat 1 | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

4. Configure and start the Flume service in the Flume home directory. Create a new configuration file, conf/kafka-example.conf, specifying the corresponding Kafka topic as the source and the HDFS sinker as the sink, with the sink path pointing at OSS. Because the E-MapReduce service implements an efficient OSS FileSystem compatible with the Hadoop FileSystem interface, the OSS path can be specified directly and the HDFS sinker will write the data to OSS automatically. The configuration file begins:

    # Name the components on this agent
    a1.sources = source1
    a1.sinks = oss1
    a1.channels = c1
    # Describe/configure
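The excerpt breaks off at this point. As a reference only, here is a minimal sketch of how the rest of conf/kafka-example.conf might look, assuming Flume 1.6's Kafka source (which consumes through Zookeeper; newer Flume releases use kafka.bootstrap.servers and kafka.topics instead) together with the standard HDFS sink and a memory channel. The bucket name YOUR_BUCKET in the OSS path is a placeholder, not something from the original article.

    # Name the components on this agent
    a1.sources = source1
    a1.sinks = oss1
    a1.channels = c1

    # Describe/configure the source: consume the Kafka topic "test"
    a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
    a1.sources.source1.zookeeperConnect = emr-header-1:2181
    a1.sources.source1.topic = test
    a1.sources.source1.batchSize = 100

    # Describe the sink: the HDFS sinker pointed at an OSS path.
    # E-MapReduce's Hadoop-compatible OSS FileSystem makes the oss:// scheme work here.
    # YOUR_BUCKET is a placeholder; substitute your own bucket.
    a1.sinks.oss1.type = hdfs
    a1.sinks.oss1.hdfs.path = oss://YOUR_BUCKET/kafka/test
    a1.sinks.oss1.hdfs.fileType = DataStream
    a1.sinks.oss1.hdfs.rollInterval = 300

    # Buffer events in memory between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000
    a1.channels.c1.transactionCapacity = 1000

    # Bind the source and sink to the channel
    a1.sources.source1.channels = c1
    a1.sinks.oss1.channel = c1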
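With the configuration in place, the agent can be launched with Flume's standard CLI. The commands below sketch that final step, which is not shown in the excerpt: flume-ng is Flume's own launcher, and the hadoop fs check relies on the Hadoop-compatible OSS FileSystem described above; YOUR_BUCKET is again a placeholder.

    bin/flume-ng agent --name a1 --conf conf --conf-file conf/kafka-example.conf \
        -Dflume.root.logger=INFO,console

    # After a few roll intervals, the vmstat records should be visible in OSS
    hadoop fs -ls oss://YOUR_BUCKET/kafka/test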
