
Installing Spark on k8s

summerking
Published 2022-10-27 05:44:30

Over the past while I have largely finished migrating our product's application layer from its original Spring Boot microservice architecture onto k8s. The process was like a blind man crossing a river, one pitfall per step, but the system as a whole now runs. Today I looked into deploying the product's compute layer (the Spark cluster) on k8s. The approach cuts a few corners, but overall there is real progress. This deployment of Spark on k8s is based on kubeapps: simple, convenient, and done in one go.

Tip

The client starts a pod that runs the Spark driver. The driver runs the main function and creates a SparkSession, which uses KubernetesClusterManager as the SchedulerBackend to launch Kubernetes pods and create the executors. Each Kubernetes pod hosts an executor and runs the application code. When the code finishes, the Spark driver cleans up the executor pods and remains in the "Completed" state.

# 1. Install kubeapps

See my earlier post on that.

# 2. Choose a Spark version

# 3. YAML configuration

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName

## Bitnami Spark image version
## ref: https://hub.docker.com/r/bitnami/spark/tags/
##
image:
  registry: docker.io
  repository: bitnami/spark
  tag: 2.4.3-debian-9-r78
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Pull secret for this image
  # pullSecrets:
  #   - myRegistryKeySecretName

## String to partially override spark.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override spark.fullname template
##
# fullnameOverride:
## Spark Components configuration
##
master:
  ## Spark master specific configuration
  ## Set a custom configuration by using an existing configMap with the configuration file.
  # configurationConfigMap:
  webPort: 8080
  clusterPort: 7077

  ## Set the master daemon memory limit.
  # daemonMemoryLimit:
  ## Use a string to set the config options in the form "-Dx=y"
  # configOptions:
  ## Set to true if you would like to see extra information in the logs
  ## It turns on BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  debug: false

  ## An array to add extra env vars
  ## For example:
  ## extraEnvVars:
  ##  - name: SPARK_DAEMON_JAVA_OPTS
  ##    value: -Dx=y
  # extraEnvVars:
  ## Kubernetes Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    fsGroup: 1001
    runAsUser: 1001
  ## Node labels for pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: 
  #  limits:
  #    cpu: 200m
  #    memory: 1Gi
  #  requests:
  #    memory: 256Mi
  #    cpu: 250m
  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: 180
    periodSeconds: 20
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

worker:
  ## Spark worker specific configuration
  ## Set a custom configuration by using an existing configMap with the configuration file.
  # configurationConfigMap:
  webPort: 8081
  ## Set to true to use a custom cluster port instead of a random port.
  # clusterPort:
  ## Set the daemonMemoryLimit as the daemon max memory
  # daemonMemoryLimit:
  ## Set the worker memory limit
  # memoryLimit:
  ## Set the maximum number of cores
  # coreLimit:
  ## Working directory for the application
  # dir:
  ## Options for the JVM as "-Dx=y"
  # javaOptions:
  ## Configuration options in the form "-Dx=y"
  # configOptions:
  ## Number of spark workers (will be the min number when autoscaling is enabled)
  replicaCount: 3

  autoscaling:
    ## Enable replica autoscaling depending on CPU
    enabled: false
    CpuTargetPercentage: 50
    ## Max number of workers when using autoscaling
    replicasMax: 5

  ## Set to true if you would like to see extra information in the logs
  ## It turns on BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  debug: false

  ## An array to add extra env vars
  ## For example:
  ## extraEnvVars:
  ##  - name: SPARK_DAEMON_JAVA_OPTS
  ##    value: -Dx=y
  # extraEnvVars:
  ## Kubernetes Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    fsGroup: 1001
    runAsUser: 1001
  ## Node labels for pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: 
  #  limits:
  #    cpu: 200m
  #    memory: 1Gi
  #  requests:
  #    memory: 256Mi
  #    cpu: 250m
  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: 180
    periodSeconds: 20
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

## Security configuration
security:
  ## Name of the secret that contains all the passwords. This is optional, by default random passwords are generated.
  # passwordsSecretName:
  ## RPC configuration
  rpc:
    authenticationEnabled: false
    encryptionEnabled: false

  ## Enables local storage encryption
  storageEncryptionEnabled: false

  ## SSL configuration
  ssl:
    enabled: false
    needClientAuth: false
    protocol: TLSv1.2
  ## Name of the secret that contains the certificates
  ## It should contain two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format.
  # certificatesSecretName:
## Service to access the master from the workers and to the WebUI
##
service:
  type: NodePort
  clusterPort: 7077
  webPort: 80
  ## Specify the NodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:
  ## Use loadBalancerIP to request a specific static IP,
  # loadBalancerIP:
  ## Service annotations done as key:value pairs
  annotations: 

## Ingress controller to access the web UI.
ingress:
  enabled: false

  ## Set this to true in order to add the corresponding annotations for cert-manager
  certManager: false

  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  annotations: 

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  hosts:
    - name: spark.local
      path: /
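Note that the resources stanzas above are left empty, so no requests or limits are applied. If you want to cap what each worker may consume, the commented block can be filled in; the figures below are illustrative placeholders, not chart defaults — size them to your own nodes:

```yaml
worker:
  ## Illustrative values only; adjust to your cluster's capacity
  resources:
    limits:
      cpu: "2"
      memory: 4Gi
    requests:
      cpu: 250m
      memory: 256Mi
```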

# 4. Deploy and wait patiently

[root@master ~]# kubectl get pod -n kspark
NAME                             READY   STATUS    RESTARTS   AGE
sulky-selection-spark-master-0   1/1     Running   0          22h
sulky-selection-spark-worker-0   1/1     Running   0          22h
sulky-selection-spark-worker-1   1/1     Running   0          22h
sulky-selection-spark-worker-2   1/1     Running   0          22h
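The "wait patiently" step can also be automated by polling until every pod reports Running. A minimal sketch: it counts the pods whose STATUS column is not "Running". Here it is fed the captured snapshot above so it runs anywhere; against a live cluster you would pipe `kubectl get pod -n kspark --no-headers` into the same awk instead (the kspark namespace is the one used in this post).

```shell
# Count pods whose STATUS column (field 3) is not "Running".
# Fed from the captured snapshot; substitute live kubectl output as needed.
SNAPSHOT='sulky-selection-spark-master-0   1/1     Running   0          22h
sulky-selection-spark-worker-0   1/1     Running   0          22h
sulky-selection-spark-worker-1   1/1     Running   0          22h
sulky-selection-spark-worker-2   1/1     Running   0          22h'
NOT_READY=$(printf '%s\n' "$SNAPSHOT" | awk '$3 != "Running" {n++} END {print n+0}')
echo "$NOT_READY"   # prints 0 once every pod is Running
```

Loop on that count (e.g. with `sleep` in a `while` loop) and the deployment is "done waiting" when it reaches zero.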

# 5. Verification

1. Get the Spark master WebUI URL by running these commands:

  export NODE_PORT=$(kubectl get --namespace kspark -o jsonpath="{.spec.ports[?(@.name=='http')].nodePort}" services sulky-selection-spark-master-svc)
  export NODE_IP=$(kubectl get nodes --namespace kspark -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

2. Submit an application to the cluster:

  To submit an application to the cluster, the spark-submit script must be used. That script can be
  obtained at https://github.com/apache/spark/tree/master/bin. You can also use kubectl run.

  Run the commands below to obtain the master IP and submit your application.

  export EXAMPLE_JAR=$(kubectl exec -ti --namespace kspark sulky-selection-spark-worker-0 -- find examples/jars/ -name 'spark-example*\.jar' | tr -d '\r')
  export SUBMIT_PORT=$(kubectl get --namespace kspark -o jsonpath="{.spec.ports[?(@.name=='cluster')].nodePort}" services sulky-selection-spark-master-svc)
  export SUBMIT_IP=$(kubectl get nodes --namespace kspark -o jsonpath="{.items[0].status.addresses[0].address}")

  kubectl run --namespace kspark sulky-selection-spark-client --rm --tty -i --restart='Never' \
    --image docker.io/bitnami/spark:2.4.3-debian-9-r78 \
    -- spark-submit --master spark://$SUBMIT_IP:$SUBMIT_PORT \
    --class org.apache.spark.examples.SparkPi \
    --deploy-mode cluster \
    $EXAMPLE_JAR 1000

** IMPORTANT: When submitting an application, the --master parameter must be set to the service IP; otherwise the application will not be able to resolve the master. **

** Please be patient while the chart is being deployed **
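The EXAMPLE_JAR lookup in step 2 of those NOTES is just a `find` over the image's examples/jars directory. It can be sketched without a cluster; the jar name below is a hypothetical stand-in for whatever the bitnami/spark image actually ships:

```shell
# Recreate the examples/jars layout, then run the same find pattern
# the chart NOTES use to locate the Spark examples jar.
mkdir -p /tmp/spark-demo/examples/jars
touch /tmp/spark-demo/examples/jars/spark-examples_2.11-2.4.3.jar
cd /tmp/spark-demo
EXAMPLE_JAR=$(find examples/jars/ -name 'spark-example*\.jar')
echo "$EXAMPLE_JAR"   # examples/jars/spark-examples_2.11-2.4.3.jar
```

In the real commands the `kubectl exec ... -- find` form runs this inside a worker pod, and `tr -d '\r'` strips the carriage return the TTY adds.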
1. Access the NodePort

Here you can see that the web UI's NodePort is 30423.

[root@master ~]# kubectl get svc --namespace kspark
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
sulky-selection-spark-headless     ClusterIP   None             <none>        <none>                        22h
sulky-selection-spark-master-svc   NodePort    10.107.246.253   <none>        7077:30028/TCP,80:30423/TCP   22h
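The 30423 comes from the PORT(S) column: `80:30423/TCP` means container port 80 (the web UI) is exposed on node port 30423. A small parse of that captured string makes the mapping explicit; live, you would use the jsonpath query from the chart NOTES instead:

```shell
# Extract the node port mapped to the web UI (container port 80)
# from the PORT(S) string captured above.
PORTS='7077:30028/TCP,80:30423/TCP'
WEB_NODEPORT=$(printf '%s\n' "$PORTS" | tr ',' '\n' | awk -F'[:/]' '$1 == 80 {print $2}')
echo "$WEB_NODEPORT"   # 30423
```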
2. Enter the master pod and start spark-shell
[root@master home]# kubectl exec -ti sulky-selection-spark-master-0 -n kspark /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
$ ls
LICENSE  NOTICE  R  README.md  RELEASE	bin  conf  data  examples  jars  kubernetes  licenses  logs  python  sbin  tmp	work  yarn
$ ls
LICENSE  NOTICE  R  README.md  RELEASE	bin  conf  data  examples  jars  kubernetes  licenses  logs  python  sbin  tmp	work  yarn
$ cd bin
$ ls
beeline		      find-spark-home	   load-spark-env.sh  pyspark2.cmd     spark-class	 spark-shell	   spark-sql	   spark-submit       sparkR
beeline.cmd	      find-spark-home.cmd  pyspark	      run-example      spark-class.cmd	 spark-shell.cmd   spark-sql.cmd   spark-submit.cmd   sparkR.cmd
docker-image-tool.sh  load-spark-env.cmd   pyspark.cmd	      run-example.cmd  spark-class2.cmd  spark-shell2.cmd  spark-sql2.cmd  spark-submit2.cmd  sparkR2.cmd
$ cd ../sbin
$ ls
slaves.sh	  start-all.sh		     start-mesos-shuffle-service.sh  start-thriftserver.sh   stop-mesos-dispatcher.sh	    stop-slaves.sh
spark-config.sh   start-history-server.sh    start-shuffle-service.sh	     stop-all.sh	     stop-mesos-shuffle-service.sh  stop-thriftserver.sh
spark-daemon.sh   start-master.sh	     start-slave.sh		     stop-history-server.sh  stop-shuffle-service.sh
spark-daemons.sh  start-mesos-dispatcher.sh  start-slaves.sh		     stop-master.sh	     stop-slave.sh
$ pwd
/opt/bitnami/spark/sbin 
$ ./spark-shell --master spark://sturdy-cars-spark-master-0.sturdy-cars-spark-headless.kspark.svc.cluster.local:7077
20/12/29 08:11:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://sturdy-cars-spark-master-0.sturdy-cars-spark-headless.kspark.svc.cluster.local:4040
Spark context available as 'sc' (master = spark://sturdy-cars-spark-master-0.sturdy-cars-spark-headless.kspark.svc.cluster.local:7077, app id = app-20201229081130-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/
         
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_222)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
3. Test submitting a jar to Spark
[root@master]# export EXAMPLE_JAR=$(kubectl exec -ti --namespace kspark sulky-selection-spark-worker-0 -- find examples/jars/ -name 'spark-example*\.jar' | tr -d '\r')
[root@master]# export SUBMIT_PORT=$(kubectl get --namespace kspark -o jsonpath="{.spec.ports[?(@.name=='cluster')].nodePort}" services sulky-selection-spark-master-svc)
[root@master]# export SUBMIT_IP=$(kubectl get nodes --namespace kspark -o jsonpath="{.items[0].status.addresses[0].address}")
[root@master]# kubectl run --namespace kspark sulky-selection-spark-client --rm --tty -i --restart='Never' \
>     --image docker.io/bitnami/spark:2.4.3-debian-9-r78 \
>     -- spark-submit --master spark://$SUBMIT_IP:$SUBMIT_PORT \
>     --class org.apache.spark.examples.SparkPi \
>     --deploy-mode cluster \
>     $EXAMPLE_JAR 1000
If you don't see a command prompt, try pressing enter.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.NativeCodeLoader).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/01/28 01:34:21 INFO SecurityManager: Changing view acls to: spark
21/01/28 01:34:21 INFO SecurityManager: Changing modify acls to: spark
21/01/28 01:34:21 INFO SecurityManager: Changing view acls groups to: 
21/01/28 01:34:21 INFO SecurityManager: Changing modify acls groups to: 
21/01/28 01:34:21 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(spark); groups with view permissions: Set(); users  with modify permissions: Set(spark); groups with modify permissions: Set()
21/01/28 01:34:22 INFO Utils: Successfully started service 'driverClient' on port 44922.
21/01/28 01:34:22 INFO TransportClientFactory: Successfully created connection to /192.168.0.177:30028 after 58 ms (0 ms spent in bootstraps)
21/01/28 01:34:22 INFO ClientEndpoint: Driver successfully submitted as driver-20210128013422-0000
21/01/28 01:34:22 INFO ClientEndpoint: ... waiting before polling master for driver state
21/01/28 01:34:27 INFO ClientEndpoint: ... polling master for driver state
21/01/28 01:34:27 INFO ClientEndpoint: State of driver-20210128013422-0000 is RUNNING
21/01/28 01:34:27 INFO ClientEndpoint: Driver running on 100.67.224.69:42072 (worker-20210127034500-100.67.224.69-42072)
21/01/28 01:34:27 INFO ShutdownHookManager: Shutdown hook called
21/01/28 01:34:27 INFO ShutdownHookManager: Deleting directory /tmp/spark-7667114a-6d54-48a9-83b7-174cabce632a
pod "sulky-selection-spark-client" deleted
[root@master]# 

The client starts a pod named sulky-selection-spark-client that runs the Spark driver. The driver runs SparkPi's main function and creates a SparkSession, which uses KubernetesClusterManager as the SchedulerBackend to launch Kubernetes pods and create the executors. Each Kubernetes pod hosts an executor and runs the application code. When the code finishes, the Spark driver cleans up the executor pods and remains in the "Completed" state.

4. Check in the web UI
[root@master ~]# kubectl get pod -n kspark
NAME                             READY   STATUS    RESTARTS   AGE
sulky-selection-spark-master-0   1/1     Running   0          22h
sulky-selection-spark-worker-0   1/1     Running   0          22h
sulky-selection-spark-worker-1   1/1     Running   0          22h
sulky-selection-spark-worker-2   1/1     Running   0          22h
[root@master ~]# kubectl get pod -n kspark
NAME                             READY   STATUS    RESTARTS   AGE
sulky-selection-spark-client     1/1     Running   0          8s
sulky-selection-spark-master-0   1/1     Running   0          22h
sulky-selection-spark-worker-0   1/1     Running   0          22h
sulky-selection-spark-worker-1   1/1     Running   0          22h
sulky-selection-spark-worker-2   1/1     Running   0          22h
[root@master ~]# kubectl get pod -n kspark
NAME                             READY   STATUS    RESTARTS   AGE
sulky-selection-spark-master-0   1/1     Running   0          22h
sulky-selection-spark-worker-0   1/1     Running   0          22h
sulky-selection-spark-worker-1   1/1     Running   0          22h
sulky-selection-spark-worker-2   1/1     Running   0          22h
Originally published 2020-12-29 on the author's personal site/blog.