
Step-by-step drawing of spatial transcriptomic network graphs (Xenium)

Original
Author: 追风少年i
Published 2024-06-05 11:19:34

Author: Evil Genius

Work should balance effort and rest, so let's draw some figures.

Many people are curious about how companies keep their bioinformatics analyses up to date. In practice, a company's updates aim to stay ahead of the curve and to be professional, automated, and pipeline-driven, alongside plenty of method research to support customized analyses.

First, load the required packages.

Code language: python
from matplotlib import patches as mpatches
from mpl_chord_diagram import chord_diagram
from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import matplotlib as mpl
import matplotlib.font_manager as fm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scanpy as sc
import seaborn as sns
import squidpy as sq
from matplotlib.lines import Line2D
from scipy import stats

Some user-defined helper functions are left for the end.

Read the data

Code language: python
adata = sc.read('./EAE_rev_ingest_only_new_v4.h5ad') 
adata 

All the basic analysis has already been done, including annotation, distance analysis, and so on.

Data annotation

Code language: python
adata.obsm["spatial"] = adata.obs[["x", "y"]].copy().to_numpy()
adata = adata[adata.obs['Annotation 3.5'] != 'Unknown']
mapping = {'Astro': 'Astro',
 'CAM': 'CAM',
 'DA-Astro': 'DA-Astr',
 'DA-MOL2': 'DA-MOL2',
 'DA-MOL5/6': 'DA-MOL5/6',
 'DA-MiGL': 'DA-MiGL',
 'DA-OPC/COP': 'DA-OPC/COP',
 'DC': 'DC',
 'Ep': 'Ep',
 'Ep-Neu': 'Ep',
 'MC/d': 'MC/d',
 'MOL2': 'MOL2',
 'MOL5/6': 'MOL5/6',
 'MiGL': 'MiGL',
 'NFOL/MFOL': 'NFOL/MFOL',
 'Neu-Chol': 'Neuron',
 'Neu-Ex-Glu': 'Neuron',
 'Neu-In-GABA': 'Neuron',
 'Neu-In-Gly': 'Neuron',
 'OPC/COP': 'OPC/COP',
 'Per': 'Vascular',
 'Schw': 'Schw',
 'T-cell': 'T-cell',
 'VEC': 'Vascular',
 'VEC-Per': 'Vascular',
 'VLMC': 'Vascular'}
adata.obs['interaction_annotations'] = adata.obs['Annotation 3.5'].map(mapping)

color_interaction_anno = {
    'MOL2': '#7c4dffff', 'MOL5/6': '#311b92ff', 'DA-MOL5/6': '#58104dff',
    'DA-MOL2': '#941b81ff', 'DA-Astr': '#25cc44ff', 'DA-MiGL': '#ff9300ff',
    'MiGL': '#ffdb00ff', 'Astro': '#009051ff', 'MC/d': '#ff009dff',
    'DC': '#b8860bff', 'T-cell': '#f8ff00ff', 'Neuron': '#133c55ff',
    'NFOL/MFOL': '#7d1b94ff', 'Vascular': '#ff0000ff', 'Schw': '#929000ff',
    'CAM': '#ff00f5ff', 'OPC/COP': '#9e8ac0', 'DA-OPC/COP': '#ff87bbff',
    'Ep': '#ffc0cbff'}  # duplicate 'DA-MiGL' entry removed
adata.obs['interaction_annotations_colors'] = adata.obs['interaction_annotations'].map(color_interaction_anno)
sc.pl.umap(adata, color = 'interaction_annotations', palette = color_interaction_anno)
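One detail worth knowing about the `.map` step above: `pd.Series.map` returns `NaN` for any label missing from the dictionary, which is one reason the 'Unknown' cells are filtered out first. A minimal sketch with hypothetical labels:

```python
import pandas as pd

# Toy labels standing in for adata.obs['Annotation 3.5'] (hypothetical values)
labels = pd.Series(['Neu-Chol', 'VEC', 'Unknown'])
mapping = {'Neu-Chol': 'Neuron', 'VEC': 'Vascular'}

# Labels absent from the mapping become NaN rather than raising an error
coarse = labels.map(mapping)
print(coarse.tolist())  # ['Neuron', 'Vascular', nan]
```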

Compute the spatial neighborhood graph

Code language: python
sq.gr.spatial_neighbors(adata, library_key = 'sample_id', coord_type="generic", delaunay=False,  n_neighs=5) 
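Conceptually, `sq.gr.spatial_neighbors` with `coord_type="generic"` links each cell to its `n_neighs` nearest neighbors in (x, y) space and stores the graph in `adata.obsp['spatial_connectivities']`. A pure-Python sketch of that idea, on hypothetical coordinates with k=2:

```python
import math

# Toy coordinates standing in for adata.obsm["spatial"] (hypothetical)
points = [(0, 0), (1, 0), (0, 1), (10, 10)]
k = 2  # analogous to n_neighs

# For each cell, keep the indices of its k nearest neighbors
neighbors = {}
for i, p in enumerate(points):
    dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
    neighbors[i] = [j for _, j in dists[:k]]

print(neighbors[0])  # [1, 2] — the two cells closest to point 0
```

The distant point (10, 10) still gets k neighbors; a generic kNN graph never leaves a cell isolated, which is why spatial outliers can acquire long edges.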

Spatial network graph for each sample

Code language: python
ad_list = []
for sample in adata.obs.sample_id.unique(): 
    ad_sp = adata[adata.obs.sample_id == sample]
    if '_B_' in sample:
        radius = 300  # set but not used below; presumably intended for cropping
        size = 20 
    else:
        radius = 180
        size = 60 
    sq.pl.spatial_scatter(ad_sp, 
                          color = 'interaction_annotations',
                          #coords=adata.obsm['spatial'],
                         # crop_coord=region_subset_dict[sample],
                          size= size,shape=None,
                          figsize=(20, 20), 
                          connectivity_key = 'spatial_connectivities', 
                         )#save = sample+'.svg')
    plt.savefig('%s.spatial.graph.png'%(sample))
    ad_list.append(ad_sp)

Subset: why do this? Because a single Xenium chip carries multiple samples (the assay is expensive, after all), so each sample has to be extracted on its own.

Code language: python
region_subset_dict  = {
    'R03_L_EAE_EARLY': [ 2500, 4500,10000, 12000],
     'R02_L_EAE_PEAK': [7500, 9500, 7750, 9750],
    'R05_L_EAE_LATE': [11000,13000,9500,11500]
                }
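Each entry above is a bounding box for one sample. A minimal sketch of how such a box selects cells, assuming the four values are ordered [xmin, xmax, ymin, ymax] (the exact order expected by squidpy's `crop_coord` may differ):

```python
# Keep a cell only if its (x, y) falls inside the bounding box.
# Assumed order: [xmin, xmax, ymin, ymax].
def in_box(x, y, box):
    xmin, xmax, ymin, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

box = [2500, 4500, 10000, 12000]        # R03_L_EAE_EARLY from above
cells = [(3000, 11000), (5000, 11000)]  # hypothetical (x, y) coordinates
kept = [c for c in cells if in_box(c[0], c[1], box)]
print(kept)  # [(3000, 11000)]
```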

Plotting

Code language: python
compartment_dictionary = {
'CNTRL':['WM','GM','Corpus callosum'],
'EAE':['WM',
    'GM',
    'PL',
    'LR',
    'LC',
    'LL',
     'Corpus callosum',
     'DA-region',]
}
adata.obs['interaction_grouping'] = adata.obs.type.astype(str) + '_' + adata.obs.batch.astype(str) + '_' +adata.obs.compartment.astype(str) 

interaction_grouping = ['CNTRL_spinal cord_WM', 'CNTRL_spinal cord_GM', 'EAE_spinal cord_WM','EAE_spinal cord_GM','EAE_spinal cord_PL', 'EAE_spinal cord_LR', 'EAE_spinal cord_LC', 'EAE_spinal cord_LL', 'EAE_brain_DA-region','EAE_brain_Corpus callosum']

level_ = 'interaction_annotations'

region_subset_dict = {'R1_L_EAE_PEAK': [(9500,11000,16000,18000)],'R6_L_EAE_LATE':[(3000,5000,16000,18000)]}

ad_sp.obs[['x','y']]

region_subset_dict = {'R1_B_EAE_PEAK':[23000, 25000, 25000, 27000],'R1_C_EAE_PEAK':[16000, 18000, 11000, 13000], 'R2_T_EAE_PEAK':[7500, 9500, 800, 2800],'R2_L_EAE_PEAK':[12000, 14000, 3500, 5500],'R5_L_EAE_LATE':[12000, 14000, 10000, 12000]}

adata_SC = adata[adata.obs.region.isin(['L'])]
adata_B = adata[adata.obs.batch == 'brain']
adata_SC.obs.time.unique()

adata_SC_peak = adata_SC[adata_SC.obs.time == 'PEAK']

def get_range(list_int):
    # Bin the edge weights into 5 equal-width bins and return the rounded bin edges.
    data = list_int
    num_bins = 5
    counts, edges = np.histogram(data, bins=num_bins)
    integers = [int(round(edge)) for edge in edges]
    print(f"Counts: {counts}")
    print(f"Bin Edges: {integers}")
    return integers

path = './'
ad_list_2 = {}
for int_group in ['EAE']:
    print(int_group)
    adata_int2 = adata_SC[adata_SC.obs['type'] == int_group]
    for time in adata_int2.obs.time.unique():
        print(time)
        adata_int = adata_int2[adata_int2.obs['time'] == time]
        sq.gr.nhood_enrichment(adata_int, cluster_key=level_)
        sq.pl.nhood_enrichment(adata_int, cluster_key=level_)
        adata_int.uns[level_+'_nhood_enrichment']['zscore'] = np.nan_to_num(adata_int.uns[level_+'_nhood_enrichment']['zscore'])
        colors =pd.DataFrame(dict(zip(adata.obs['interaction_annotations'].cat.categories,adata.uns['interaction_annotations_colors'])).values())#pd.DataFrame(adata_int.uns[level_+'_colors'])
        for_eneritz = pd.DataFrame(adata_int.uns[level_+"_nhood_enrichment"]["zscore"])
        for_eneritz.index = adata_int.obs['interaction_annotations'].cat.categories
        for_eneritz.columns = adata_int.obs['interaction_annotations'].cat.categories
        #for_eneritz.to_csv(path+'SC_'+int_group+'_'+time+'_interaction_matrix.csv')
        size = pd.DataFrame(adata_int.obs[level_].value_counts())
        print(size)
        # create network plot
        import networkx as nx
        import matplotlib.pyplot as plt
        G = nx.Graph()
        nodes = adata_int.obs[level_].cat.categories
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        colors['cells'] = categories
        for i, node in enumerate(nodes):
            for j in range(i+1, len(nodes)):
                zscore = adata_int.uns[level_+"_nhood_enrichment"]["zscore"][i, j]
                pval = stats.norm.sf(abs(zscore))*2  # two-sided p-value of the z-score
                if zscore > 1:
                    G.add_edge(nodes[i], nodes[j], weight=zscore)
        pos = nx.spring_layout(G, k=0.5, seed=42)
        size = size[size.index.isin(pos.keys())]
        size = size.sort_index()
        colors = colors[colors.cells.isin(pos.keys())]
        colors = dict(zip(colors['cells'], colors[0]))
        edge_widths = [d['weight'] for u, v, d in G.edges(data=True)]
        size = dict(zip(size.index, size['interaction_annotations']))
        node_size = [size[node] for node in G.nodes()]
        node_colors = [colors[node] for node in G.nodes()]
        plt.figure(figsize=(10, 10))
        # avoid naming this `sc`, which would shadow the scanpy import
        node_collection = nx.draw_networkx_nodes(G, pos, node_color=node_colors, alpha=0.5, node_size=np.array(node_size)/2)
        nx.draw_networkx_edges(G, pos, edge_color="black", alpha=0.5, width=0.25*(np.array(edge_widths)/5))
        nx.draw_networkx_labels(G, pos, font_size=15, font_color="black")
        plt.axis("off")
        legend1 = plt.legend(*node_collection.legend_elements("sizes", num=6),  
                   bbox_to_anchor=(1, 1), 
                   prop={'size': 70},
                   title = '# cells in cluster',
                  frameon = False)
        lines = []
        edges_weight_list = sorted(np.array(edge_widths))
        integers = get_range(edges_weight_list)
        for i, width in enumerate(integers):
            lines.append(Line2D([],[], linewidth=0.25*(width/5), color='black'))
        legend2 = plt.legend(lines,integers,prop={'size': 20}, bbox_to_anchor=(0, 0.5), frameon=False, ) 
        plt.gca().add_artist(legend1)
        plt.gca().add_artist(legend2)
        plt.rcParams['pdf.fonttype'] = 42
        plt.rcParams['ps.fonttype'] = 42
        plt.rcParams['svg.fonttype'] = 'none'
        plt.savefig(path+'SC_'+int_group+'_'+time+'_nhood_enrichment.svg', bbox_inches="tight", dpi = 500)
        plt.show()
        sq.gr.interaction_matrix(adata_int, cluster_key=level_, normalized = False)
        #sq.pl.interaction_matrix(adata_int, cluster_key=level_,)# vmax = 5000, method="ward",)
        df = pd.DataFrame(adata_int.uns[level_+'_interactions'])
        df_filt = df#[df.sum() > df.sum().quantile(0.6)]
        df_filt = df_filt.T
        df_filt = df_filt[df_filt.index.isin(df_filt.columns)]
        colors =pd.DataFrame(adata_int.uns[level_+'_colors'])
        colors = colors[colors.index.isin(df_filt.columns)][0]
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        categories = categories[categories.index.isin(df_filt.columns)][0]
        df_filt.index = categories
        df_filt.columns = categories
        #df.index= adata_int.obs.level3.cat.categories
        #df.columns= adata_int.obs.level3.cat.categories
        with plt.rc_context({'figure.figsize': (10, 10), 'figure.dpi':100}):
            chord_diagram(df_filt, names = list(categories), 
                          rotate_names = True, fontcolor = 'black',
                          fontsize=10,colors = list(colors), alpha = 0.90,
                         sort = 'distance', use_gradient= True, show= False)
            plt.rcParams['pdf.fonttype'] = 42
            plt.rcParams['ps.fonttype'] = 42
            plt.rcParams['svg.fonttype'] = 'none'
            plt.savefig(path+'SC_'+int_group+'_'+time+'_interaction_matrix.svg', bbox_inches="tight")
        plt.show()
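The loop above converts each neighborhood-enrichment z-score to a two-sided p-value with `stats.norm.sf(abs(zscore))*2`. The same quantity can be written with only the standard library, which makes the formula explicit:

```python
import math

def two_sided_p(z):
    # Two-sided p-value of a standard-normal z-score;
    # equivalent to stats.norm.sf(abs(z)) * 2 in the loop above.
    return math.erfc(abs(z) / math.sqrt(2))

print(round(two_sided_p(1.96), 3))  # ~0.05, the familiar significance cutoff
```

Note that the edge filter itself uses the raw z-score (`zscore > 1`), not this p-value, so weakly enriched pairs are kept as thin edges.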
Code language: python
ad_list_2 = {}
for int_group in ['EAE']:#,#'CNTRL']:
    print(int_group)
    adata_int2 = adata_B[adata_B.obs['type'] == int_group]
    for region in adata_int2.obs.region.unique():
        print(region)
        adata_int = adata_int2[adata_int2.obs['region'] == region]
        sq.gr.nhood_enrichment(adata_int, cluster_key=level_)
        sq.pl.nhood_enrichment(adata_int, cluster_key=level_)
        adata_int.uns[level_+'_nhood_enrichment']['zscore'] = np.nan_to_num(adata_int.uns[level_+'_nhood_enrichment']['zscore'])
        colors =pd.DataFrame(dict(zip(adata.obs['interaction_annotations'].cat.categories,adata.uns[level_+'_colors'])).values())#pd.DataFrame(adata_int.uns[level_+'_colors'])
        for_eneritz = pd.DataFrame(adata_int.uns[level_+"_nhood_enrichment"]["zscore"])
        for_eneritz.index = adata_int.obs['interaction_annotations'].cat.categories
        for_eneritz.columns = adata_int.obs['interaction_annotations'].cat.categories
        for_eneritz.to_csv(path+'B_'+int_group+'_'+region+'_interaction_matrix.csv')  # use region, not the leftover time variable from the previous loop
        size = pd.DataFrame(adata_int.obs[level_].value_counts())
        print(size)
        # create network plot
        import networkx as nx
        import matplotlib.pyplot as plt
        G = nx.Graph()
        nodes = adata_int.obs[level_].cat.categories
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        colors['cells'] = categories
        for i, node in enumerate(nodes):
            for j in range(i+1, len(nodes)):
                zscore = adata_int.uns[level_+"_nhood_enrichment"]["zscore"][i, j]
                if zscore > -1:  # note: a much looser cutoff than the zscore > 1 used for the spinal cord
                    G.add_edge(nodes[i], nodes[j], weight=zscore)
        pos = nx.spring_layout(G,  k=0.15,seed=42)
        size = size[size.index.isin(pos.keys())]
        size = size.sort_index()
        colors = colors[colors.cells.isin(pos.keys())]
        colors = dict(zip(colors['cells'], colors[0]))
        edge_widths = [d['weight'] for u, v, d in G.edges(data=True)]
        size = dict(zip(size.index, size[level_]))
        node_size = [size[node] for node in G.nodes()]
        node_colors = [colors[node] for node in G.nodes()]        
        plt.figure(figsize=(20, 20))
        # avoid naming this `sc`, which would shadow the scanpy import
        node_collection = nx.draw_networkx_nodes(G, pos, node_color=node_colors, alpha=0.5, node_size=np.array(node_size)/2)
        nx.draw_networkx_edges(G, pos, edge_color="black", alpha=0.5, width=0.25*(np.array(edge_widths)/5))
        nx.draw_networkx_labels(G, pos, font_size=20, font_color="black")
        plt.axis("off")        
        legend1 = plt.legend(*node_collection.legend_elements("sizes", num=6),  
                   bbox_to_anchor=(1, 1), 
                   prop={'size': 80},
                   title = '# cells in cluster',
                  frameon = False)        
        lines = []
        edges_weight_list = sorted(np.array(edge_widths))
        integers = get_range(edges_weight_list)
        for i, width in enumerate(integers):
            lines.append(Line2D([],[], linewidth=0.25*(width/5), color='black'))
        legend2 = plt.legend(lines,integers,prop={'size': 20}, bbox_to_anchor=(0, 0.5), frameon=False, )         
        plt.gca().add_artist(legend1)
        plt.gca().add_artist(legend2)
        plt.rcParams['pdf.fonttype'] = 42
        plt.rcParams['ps.fonttype'] = 42
        plt.rcParams['svg.fonttype'] = 'none'
        plt.savefig(path+'B_'+int_group+'_'+region+'_nhood_enrichment.svg', bbox_inches="tight", dpi = 500)  # include region so each iteration writes a distinct file
        sq.gr.interaction_matrix(adata_int, cluster_key=level_, normalized = False)
        #sq.pl.interaction_matrix(adata_int, cluster_key=level_,)# vmax = 5000, method="ward",)
        df = pd.DataFrame(adata_int.uns[level_+'_interactions'])
        df_filt = df#[df.sum() > df.sum().quantile(0.6)]
        df_filt = df_filt.T
        df_filt = df_filt[df_filt.index.isin(df_filt.columns)]
        colors =pd.DataFrame(adata_int.uns[level_+'_colors'])
        colors = colors[colors.index.isin(df_filt.columns)][0]
        categories = pd.DataFrame(adata_int.obs[level_].cat.categories)
        categories = categories[categories.index.isin(df_filt.columns)][0]
        df_filt.index = categories
        df_filt.columns = categories
        #df.index= adata_int.obs.level3.cat.categories
        #df.columns= adata_int.obs.level3.cat.categories
        with plt.rc_context({'figure.figsize': (10, 10), 'figure.dpi':100}):
            chord_diagram(df_filt, names = list(categories), 
                          rotate_names = True, fontcolor = 'black',
                          fontsize=10,colors = list(colors), alpha = 0.90,
                         sort = 'distance', use_gradient= True, show= False)
            plt.rcParams['pdf.fonttype'] = 42
            plt.rcParams['ps.fonttype'] = 42
            plt.rcParams['svg.fonttype'] = 'none'
            plt.savefig(path+'B_'+int_group+'_'+region+'_interaction_matrix.svg', bbox_inches="tight")
        plt.show()
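The `chord_diagram` call takes the square matrix produced by `sq.gr.interaction_matrix`, which counts, for each pair of cluster labels, how many edges of the spatial graph connect them. A pure-Python sketch of that counting on toy data (a simplified, symmetric variant; squidpy's exact counting convention may differ):

```python
# Toy cluster label per cell and toy spatial-neighbor edges (hypothetical)
labels = ['A', 'A', 'B', 'B']
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]

clusters = sorted(set(labels))
idx = {c: i for i, c in enumerate(clusters)}

# matrix[i][j] = number of spatial edges linking cluster i and cluster j
matrix = [[0] * len(clusters) for _ in clusters]
for u, v in edges:
    i, j = idx[labels[u]], idx[labels[v]]
    matrix[i][j] += 1
    if i != j:
        matrix[j][i] += 1  # keep off-diagonal counts symmetric

print(matrix)  # [[1, 2], [2, 1]]
```

Large off-diagonal entries become the thick ribbons between two cell types in the chord diagram; the diagonal captures within-type contacts.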

Life is good, and better with you.

Original statement: This article is published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, please contact cloudcommunity@tencent.com for removal.

