
Spark Streaming MySQL

Stack Overflow user
Asked on 2016-08-03 09:21:21
1 answer · 1.5K views · 0 followers · Score: 1

I want to fetch the rows inserted into an external MySQL database every 2 minutes. I want to do this with Spark Streaming.

But the program only runs once. The first run gives me the data, but after that I get the following error and the program terminates.

The error I get is:

16/08/02 11:15:44 INFO JdbcRDD: closed connection
16/08/02 11:15:44 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 620 bytes result sent to driver
16/08/02 11:15:44 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 451 ms on localhost (1/1)
16/08/02 11:15:44 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/08/02 11:15:44 INFO DAGScheduler: ResultStage 0 (foreach at databaseread.scala:33) finished in 0.458 s
16/08/02 11:15:44 INFO DAGScheduler: Job 0 finished: foreach at databaseread.scala:33, took 0.664559 s
16/08/02 11:15:44 ERROR StreamingContext: Error starting the context, marking it as stopped
java.lang.IllegalArgumentException: requirement failed: No output operations registered, so nothing to execute
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.streaming.DStreamGraph.validate(DStreamGraph.scala:163)
    at org.apache.spark.streaming.StreamingContext.validate(StreamingContext.scala:543)
    at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:595)
    at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:594)
    at org.test.spark.databaseread$.main(databaseread.scala:41)
    at org.test.spark.databaseread.main(databaseread.scala)
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: No output operations registered, so nothing to execute
    at scala.Predef$.require(Predef.scala:224)

I am posting my code here. Please help me.

package org.test.spark

import org.xml.sax.helpers.NewInstance
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.JdbcRDD
import java.sql.DriverManager
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds

object databaseread {
  def main(args: Array[String]) {
    val url = "jdbc:mysql://localhost:3306/dbname"
    val uname = "root"
    val pwd = "root"
    var i = 0
    val driver = "com.mysql.jdbc.Driver"
    val conf = new SparkConf().setAppName("DBget").setMaster("local")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(60))

    val RDD = new JdbcRDD(sc, () => DriverManager.getConnection(url, uname, pwd),
      "select * from crimeweathercoords where ? = ?", 1, 1, 1,
      r => r.getString("Borough") + "," + r.getString("Month"))

    ssc.checkpoint(".")

    ssc.start()
    ssc.awaitTermination()
  }
}

1 Answer

Stack Overflow user

Answered on 2016-08-25 05:56:41

Spark Streaming is not designed to read data from a relational database like MySQL. You could try creating your own custom receiver, but at that point it is probably easier to just use the regular Spark API to retrieve the data in chunks.
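The periodic-read approach the answer suggests can be sketched as follows. This is a minimal sketch, not the answerer's code: the connection details and column names are taken from the question, the `id` key column and the bound of 1000 are assumptions, and the scheduling is a plain sleep loop rather than Spark Streaming, so no StreamingContext (and hence no output operation) is needed:

```scala
package org.test.spark

import java.sql.DriverManager
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.JdbcRDD

// Sketch: poll MySQL every 2 minutes with a regular JdbcRDD instead of Spark Streaming.
object PeriodicDbRead {
  def main(args: Array[String]): Unit = {
    val url = "jdbc:mysql://localhost:3306/dbname" // from the question
    val conf = new SparkConf().setAppName("DBget").setMaster("local")
    val sc = new SparkContext(conf)

    while (true) {
      // JdbcRDD requires a query with two '?' placeholders that it fills
      // with a numeric key range to partition the read.
      val rows = new JdbcRDD(sc,
        () => DriverManager.getConnection(url, "root", "root"),
        "select * from crimeweathercoords where ? <= id and id <= ?", // 'id' column is an assumption
        1, 1000, 1, // lower bound, upper bound, number of partitions (assumed values)
        r => r.getString("Borough") + "," + r.getString("Month"))
      rows.collect().foreach(println)
      Thread.sleep(2 * 60 * 1000) // wait 2 minutes before the next poll
    }
  }
}
```

To pick up only newly inserted rows on each iteration, you would track the highest key seen so far and use it as the lower bound of the next query.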

Score: -1
Original content provided by Stack Overflow; translation supported by Tencent Cloud.
Original link:

https://stackoverflow.com/questions/38732868
