
Updates through Apache Phoenix are very slow; how can I improve write performance?

Stack Overflow user
Asked on 2018-07-11 05:52:28
1 answer · 2K views · 0 followers · 0 votes

I am working on a POC in which I use Phoenix for single writes, updating the database after each write, so I cannot batch the writes. In terms of TPS, I am getting one transaction per second. I have a 3-node EMR cluster running HBase with S3 as the backing store.

I have tried the tuning parameters I found online, and I wrote a multithreaded application, but performance is still very slow.

I have written a threaded program to bulk-insert data into Phoenix. I am using Phoenix because of its secondary-index capability, but my write performance is very slow.

The explain plan for the count query looks like this:

0: jdbc:phoenix:localhost:2181:/hbase> explain select count(1) from VBQL_PHOENIX_TRANSCRIPT5;
[plan output garbled in the original; it reports a full scan over VBQL_PHOENIX_TRANSCRIPT5, with only the estimates 314572800 and 6838 recoverable]

The problem is that this is hard to scale. I tried adding more nodes to the HBase cluster, and I tried adding more threads to the client program, but throughput never exceeds 6K per minute, which is very slow. Any help is greatly appreciated.
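As a sanity check on those numbers: 6K rows per minute is only about 100 rows per second. With synchronous single-row commits, achievable throughput is roughly the thread count divided by the per-commit latency, so a ceiling like this is consistent with each commit paying a full round trip. The toy model below is an assumption about the bottleneck, not a measurement of this cluster:

```java
public class ThroughputModel {
    // With synchronous single-row commits, each thread completes at most one
    // commit per latency interval, so throughput caps out at
    // threads / commitLatencySeconds (ignoring server-side contention).
    static double maxRowsPerSecond(int threads, double commitLatencySeconds) {
        return threads / commitLatencySeconds;
    }

    public static void main(String[] args) {
        // e.g. 10 client threads at ~100 ms per commit -> ~100 rows/sec (~6K/min)
        System.out.println(maxRowsPerSecond(10, 0.1));
    }
}
```

Under this model, adding region servers does not raise the ceiling until either the per-commit latency or the commit granularity changes, which would match the symptom of extra nodes not helping.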

 <property> 
      <name>index.writer.threads.max</name> 
      <value>30</value> 
 </property> 
 <property> 
      <name>index.builder.threads.max</name> 
      <value>30</value> 
 </property> 
 <property> 
      <name>phoenix.query.threadPoolSize</name> 
      <value>256</value> 
 </property> 
 <property> 
      <name>index.builder.threads.keepalivetime</name> 
      <value>90000</value> 
 </property> 
<property> 
      <name>phoenix.query.timeoutMs</name> 
      <value>90000</value> 
 </property>
<property> 
      <name>phoenix.default.update.cache.frequency</name> 
      <value>300000</value> 
 </property>

I created the table as follows:

CREATE TABLE IF NOT EXISTS VBQL_PHOENIX_TRANSCRIPT ( PK VARCHAR NOT NULL PRIMARY KEY, IMMUTABLES.VBMETAJSON VARCHAR, IMMUTABLES.ACCOUNTID VARCHAR, IMMUTABLES.DATECREATED VARCHAR, IMMUTABLES.DATEFINISHED VARCHAR, IMMUTABLES.MEDIAID VARCHAR, IMMUTABLES.JOBID VARCHAR, IMMUTABLES.STATUS VARCHAR, UPDATABLE.METADATA VARCHAR, CATEGORIES.C_ACOUNTID_CATEGORYNAME VARCHAR, COMPUTED.ADDITIONALMETRICS VARCHAR) SALT_BUCKETS =100;
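For context on SALT_BUCKETS = 100: Phoenix prepends a single salt byte, derived from a hash of the row key modulo the bucket count, so sequential writes spread across region servers instead of hot-spotting one region. The sketch below mimics that idea with a simple rolling hash; it is an illustration, not Phoenix's actual SaltingUtil hash. Note also that 100 buckets means every query on a salted table fans out into 100 parallel scans.

```java
import java.nio.charset.StandardCharsets;

// Illustration of row-key salting: a one-byte bucket id is prepended to the
// row key, spreading adjacent keys across regions. The hash is a stand-in,
// not the one Phoenix actually uses.
public class SaltSketch {
    static int saltBucket(String rowKey, int buckets) {
        int h = 0;
        for (byte b : rowKey.getBytes(StandardCharsets.UTF_8)) {
            h = 31 * h + b; // simple rolling hash
        }
        return Math.floorMod(h, buckets);
    }

    public static void main(String[] args) {
        // Two adjacent keys usually land in different buckets.
        System.out.println(saltBucket("JOB123-row-000001", 100));
        System.out.println(saltBucket("JOB123-row-000002", 100));
    }
}
```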

with a secondary index like this:

CREATE INDEX  VBQL_PHOENIX_TRANSCRIPT_INDEX5  ON  VBQL_PHOENIX_TRANSCRIPT5 (IMMUTABLES.MEDIAID) ;


A sample upsert:
UPSERT INTO VBQL_PHOENIX_TRANSCRIPT2  ( PK , IMMUTABLES.ACCOUNTID , IMMUTABLES.DATECREATED , IMMUTABLES.DATEFINISHED ,
IMMUTABLES.MEDIAID , IMMUTABLES.JOBID , IMMUTABLES.STATUS  ) 
VALUES ('5DAD32BA-9656-41F3-BD38-BBF890B85CD62018-05-18T18:09:38.60700005D681A95C-8CDA-47B2-93BE-C165B1DEC7D8', 'AAAAAAAAAAAAAAA5DAD32BA-9656', 
'2018-04-18T18:09:38.607+0000', '2018-05-18T18:09:38.607+0000','5D681A95C-8CDA-47B2-93BE-C165B1DEC7D8', 'JOB123', 'FINISHED');
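If the workflow ever allows committing every N rows instead of after each row, Phoenix buffers upserts client-side and only sends them to the region servers on commit, which amortizes the round trip. A hedged sketch of that pattern follows; the JDBC URL, the column subset, and the batch size of 1000 are assumptions to adapt, and loadRows() is a hypothetical data source:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.List;

public class BatchUpsert {
    static final int BATCH_SIZE = 1000; // assumed starting point; tune per cluster

    // Pure helper: commit once every batchSize buffered rows.
    static boolean shouldCommit(long rowsBuffered, int batchSize) {
        return rowsBuffered > 0 && rowsBuffered % batchSize == 0;
    }

    static void runBatch(String jdbcUrl) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false); // Phoenix buffers mutations until commit
            String sql = "UPSERT INTO VBQL_PHOENIX_TRANSCRIPT "
                       + "(PK, IMMUTABLES.MEDIAID, IMMUTABLES.STATUS) VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                long rows = 0;
                for (String[] row : loadRows()) { // hypothetical data source
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.setString(3, row[2]);
                    ps.executeUpdate();            // buffered client-side
                    if (shouldCommit(++rows, BATCH_SIZE)) {
                        conn.commit();             // one flush per batch
                    }
                }
                conn.commit(); // flush the tail of the batch
            }
        }
    }

    static List<String[]> loadRows() { // placeholder for the real feed
        return Collections.emptyList();
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("usage: BatchUpsert <jdbc:phoenix:...>");
            return; // no live cluster assumed in this sketch
        }
        runBatch(args[0]);
    }
}
```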


HBase is installed on the EMR cluster, and the table was created with the CREATE TABLE statement above.

The EMR cluster has 4 m4.4xlarge nodes (32 vCores, 64 GiB memory, EBS-only storage, 32 GiB EBS).

The client is a Java program running on EC2 (m4.10xlarge: 40 vCPUs, 160 GiB RAM, EBS-only storage, 10 Gbps network, 4,000 Mbps EBS bandwidth). It is a multithreaded program that opens connections to HBase and performs the inserts.


The client hbase-site.xml looks like the following:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
     /**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.16.129.55</value>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>s3://dev-mock-transcription/</value>
  </property>

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.rest.port</name>
    <value>8070</value>
  </property>


  <property>
    <name>hbase.replication</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.balancer.tablesOnMaster</name>
    <value>hbase:meta</value>
  </property>

  <property>
    <name>hbase.bucketcache.size</name>
    <value>8192</value>
  </property>

  <property>
    <name>hbase.master.balancer.uselocality</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.master.startup.retainassign</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.wal.dir</name>
    <value>hdfs://10.16.129.55:8020/user/hbase/WAL</value>
  </property>

  <property>
    <name>hbase.bulkload.retries.retryOnIOException</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.bucketcache.ioengine</name>
    <value>files:/mnt1/hbase/bucketcache</value>
  </property>

   <property>
      <name>hbase.rpc.timeout</name>
      <value>1800000</value>
    </property>


  <property>
      <name>phoenix.query.timeoutMs</name>
      <value>18000000</value>
  </property>

  <property>
      <name>hbase.regionserver.lease.period</name>
      <value>18000000</value>
  </property>

  <property>
      <name>hbase.client.scanner.caching</name>
      <value>180000</value>
  </property>

  <property>
      <name>hbase.client.scanner.timeout.period</name>
      <value>18000000</value>
  </property>

 <property>
      <name>index.writer.threads.max</name>
      <value>30</value>
 </property>
 <property>
      <name>index.builder.threads.max</name>
      <value>30</value>
 </property>
 <property>
      <name>phoenix.query.threadPoolSize</name>
      <value>256</value>
 </property>
 <property>
      <name>index.builder.threads.keepalivetime</name>
      <value>90000</value>
 </property>
<property>
      <name>phoenix.query.timeoutMs</name>
      <value>90000</value>
 </property>


</configuration>



The hbase-env.sh looks like the following:

[ec2-user@ip-10-16-129-55 conf]$ cat hbase-env.sh 
#
#/**
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/

# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=/etc/hadoop/conf

# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G
export HBASE_HEAPSIZE=1024

# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of 
# offheap, set the value to "8G".
# export HBASE_OFFHEAPSIZE=1G

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=70 -Dsun.net.inetaddr.ttl=60 -Dnetworkaddress.cache.ttl=60"


# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching. 

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
# NOTE: HBase provides an alternative JMX implementation to fix the random ports issue, please see JMX
# section in HBase Reference Guide for instructions.

# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
#HBASE_REGIONSERVER_MLOCK=true
#HBASE_REGIONSERVER_UID="hbase"

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers 
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
# export HBASE_MANAGES_ZK=true

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the 
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as 
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.

export HBASE_MANAGES_ZK=false
export HBASE_DAEMON_DEFAULT_ROOT_LOGGER=INFO,DRFA
export HBASE_DAEMON_DEFAULT_SECURITY_LOGGER=INFO,DRFAS
export HBASE_CLASSPATH=${HBASE_CLASSPATH}${HBASE_CLASSPATH:+:}$(ls -1 /usr/lib/phoenix/phoenix-*-HBase-*-server.jar)

And the server-side hbase-site.xml looks like this:
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>ip-10-16-129-55.ec2.internal</value>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>s3://dev-mock-transcription/</value>
  </property>

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.rest.port</name>
    <value>8070</value>
  </property>

  <property>
    <name>hbase.replication</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.balancer.tablesOnMaster</name>
    <value>hbase:meta</value>
  </property>

  <property>
    <name>hbase.bucketcache.size</name>
    <value>8192</value>
  </property>

  <property>
    <name>hbase.master.balancer.uselocality</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.master.startup.retainassign</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.wal.dir</name>
    <value>hdfs://ip-10-16-129-55.ec2.internal:8020/user/hbase/WAL</value>
  </property>

  <property>
    <name>hbase.bulkload.retries.retryOnIOException</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.bucketcache.ioengine</name>
    <value>files:/mnt1/hbase/bucketcache</value>
  </property>



<property>
      <name>hbase.rpc.timeout</name>
      <value>180000</value>
    </property> 
  <property>
      <name>index.writer.threads.max</name>
      <value>30</value>
 </property>
 <property>
      <name>index.builder.threads.max</name>
      <value>30</value>
 </property>
 </configuration>

1 Answer

Stack Overflow user

Accepted answer

Posted on 2018-07-19 05:20:17

  1. For large writes I use a Spark job with the phoenix-spark connector; I was able to reach 16K upserts per second with 4 executors.
  2. It is normal for the count query to be very slow: as your explain plan shows, it performs a full scan, which means it has to read every row to compute the count. Phoenix is very powerful when you query data by its primary key, or at least by a leading prefix of it. The longer the primary-key prefix in your predicate, the faster the query.
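The point about leading primary-key prefixes can be illustrated without Phoenix at all: HBase stores rows sorted by row key, so a predicate on a leading prefix selects one contiguous key range, while a predicate on anything else forces a full scan. A toy sketch using a sorted map (the keys are hypothetical):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy model of HBase row ordering: keys are sorted, so all rows matching a
// leading prefix form one contiguous range that can be scanned directly.
public class PrefixScan {
    static SortedMap<String, String> prefixScan(TreeMap<String, String> table,
                                                String prefix) {
        // Every key starting with `prefix` lies in [prefix, prefix + maxChar)
        return table.subMap(prefix, prefix + Character.MAX_VALUE);
    }

    public static void main(String[] args) {
        TreeMap<String, String> table = new TreeMap<>();
        table.put("toto", "v1");
        table.put("tota", "v2");
        table.put("zz", "v3");
        // Only the keys under the "to" prefix are visited.
        System.out.println(prefixScan(table, "to").keySet()); // [tota, toto]
    }
}
```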
0: jdbc:phoenix:> explain select count(1) from TEST_TABLE;
+------------------------------------------------------------------------------+
|                                     PLAN                                     |
+------------------------------------------------------------------------------+
| CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY FULL SCAN OVER TEST_TABLE       |
|     SERVER FILTER BY FIRST KEY ONLY                                          |
|     SERVER AGGREGATE INTO SINGLE ROW                                         |
+------------------------------------------------------------------------------+
3 rows selected (0.039 seconds)

0: jdbc:phoenix:> select count(1) from TEST_TABLE;
1 row selected (0.555 seconds)

0: jdbc:phoenix:> explain select * from TEST_TABLE where PK like 'toto';
+---------------------------------------------------------------------------------------+
|                                         PLAN                                          |
+---------------------------------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 1 KEY OVER TEST_TABLE       |
+---------------------------------------------------------------------------------------+
1 row selected (0.047 seconds)

0: jdbc:phoenix:> select * from TEST_TABLE where PK like 'to%';
2 rows selected (0.142 seconds)

0: jdbc:phoenix:> explain select * from TEST_TABLE where PK = 'toto';
+---------------------------------------------------------------------------------------+
|                                         PLAN                                          |
+---------------------------------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 1 KEY OVER TEST_TABLE       |
+---------------------------------------------------------------------------------------+
1 row selected (0.019 seconds)

0: jdbc:phoenix:> select * from TEST_TABLE where PK = 'toto';
1 row selected (0.05 seconds)
2 votes
The original page content is provided by Stack Overflow.
Source: https://stackoverflow.com/questions/51287292