I. Environment

  1. Huawei Data Warehouse Service (DWS), cluster version 8.1.3.320
  2. Cluster topology (topology diagram omitted)

II. Data Distribution

DWS uses horizontal table sharding: the tuples of a business table are spread across the nodes of the cluster. The benefit is that a query can use its filter conditions to skip unnecessary data and quickly locate where the data is stored, which greatly improves database performance.
With horizontal sharding, the data of a single table is distributed across multiple nodes according to a suitable distribution strategy; DWS supports the strategies shown in Table 1. When running CREATE TABLE, users can specify the DISTRIBUTE BY clause to shard the table and the PARTITION BY clause to further partition each shard, thereby organizing how the data is laid out.

The data distribution strategies are as follows (Table 1; shown as an image in the original and omitted here).
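
To make the two clauses concrete, here is a minimal sketch of the DISTRIBUTE BY syntax for the HASH and REPLICATION strategies. The table names and columns below are hypothetical and only illustrate the syntax; they are not part of the experiment in the next section.

    -- Hypothetical fact table: rows are spread across the DNs by the hash of sale_id.
    CREATE TABLE sales_fact(sale_id int NOT NULL, cust_id int, amount numeric(10,2))
    DISTRIBUTE BY HASH(sale_id);

    -- Hypothetical dimension table: every DN keeps a complete copy of the rows.
    CREATE TABLE region_dim(region_id int NOT NULL, region_name varchar(50))
    DISTRIBUTE BY REPLICATION;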

III. Experimental Verification

  1. The following operations are performed in gsql, logged in as the system user dbadmin. The login command is:
    gsql -d postgres -p 8000 -h xxx.249.99.67 -U dbadmin -W xxxxx@1234 -r
    
  2. Create a hash-distributed, range-partitioned table
    create table part01(id int not null,name varchar(50))
    distribute by hash(id) 
    PARTITION BY RANGE(id)(
    PARTITION p1 VALUES LESS THAN (3),
    PARTITION p2 VALUES LESS THAN (5),
    PARTITION p3 VALUES LESS THAN (7),
    PARTITION p4 VALUES LESS THAN (MAXVALUE )
    );
    
    View the table structure:
    sjmdb=> \d+ part01
                               Table "public.part01"
     Column |         Type          | Modifiers | Storage  | Stats target | Description
    --------+-----------------------+-----------+----------+--------------+-------------
     id     | integer               | not null  | plain    |              |
     name   | character varying(50) |           | extended |              |
    Range partition by(id)
    Number of partition: 4 (View pg_partition to check each partition range.)
    Has OIDs: no
    Distribute By: HASH(id)
    Location Nodes: ALL DATANODES
    Options: orientation=row, compression=no
    
    View the table's OID:
    sjmdb=> select oid,relname from pg_class where relname='part01';
      oid  | relname
    -------+---------
     36839 | part01
    (1 row)
    
    Use the table's OID to view its partition information:
    
    sjmdb=> select relname,parttype,parentid,boundaries from pg_partition where parentid=36839;
     relname | parttype | parentid | boundaries
    ---------+----------+----------+------------
     part01  | r        |    36839 |
     p1      | p        |    36839 | {3}
     p2      | p        |    36839 | {5}
     p3      | p        |    36839 | {7}
     p4      | p        |    36839 | {NULL}
    (5 rows)
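
    The same metadata can also be read without looking up the OID first. The queries below are a sketch for cross-checking: they assume the 'part01'::regclass cast works as in PostgreSQL and that the Postgres-XC style pgxc_class catalog (which records a table's locator type and distribution column) is available on this cluster; they are not part of the original experiment.

    -- Partition metadata without hard-coding the OID:
    select relname, parttype, boundaries
    from pg_partition
    where parentid = 'part01'::regclass;

    -- Distribution metadata: pclocatortype 'H' is expected to indicate hash
    -- distribution, and pcattnum the position of the distribution column.
    select c.relname, x.pclocatortype, x.pcattnum
    from pg_class c
    join pgxc_class x on x.pcrelid = c.oid
    where c.relname = 'part01';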
    
  3. Insert test data
    insert into part01 values(1,'test1'),(2,'test2'),(3,'test3'),(4,'test4'),(5,'test5'),(6,'test6');
    
  4. View the DN names
    SELECT node_name,node_type,node_host FROM pgxc_node where node_type='D';
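
    In Postgres-XC based systems such as DWS, every row of a hash-distributed table also carries a hidden xc_node_id system column. Assuming that column and the node_id column of pgxc_node behave here as documented, the row-to-DN mapping can be summarized in a single query from the CN; this is a sketch for cross-checking the per-DN results collected in the next step.

    -- Count the rows of part01 held by each DN by joining the hidden
    -- xc_node_id system column to pgxc_node.
    select n.node_name, count(*) as row_cnt
    from part01 t
    join pgxc_node n on n.node_id = t.xc_node_id
    group by n.node_name
    order by n.node_name;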
    
  5. View the data in each partition on each DN
    sjmdb=> execute direct on(dn_6001_6002) 'select * from part01 partition (p1)';
     id | name
    ----+------
    (0 rows)
    sjmdb=> execute direct on(dn_6001_6002) 'select * from part01 partition (p2)';
     id | name
    ----+-------
      3 | test3
    (1 row)
    
    sjmdb=> execute direct on(dn_6001_6002) 'select * from part01 partition (p3)';
     id | name
    ----+------
    (0 rows)
    
    sjmdb=> execute direct on(dn_6001_6002) 'select * from part01 partition (p4)';
     id | name
    ----+------
    (0 rows)
    
    sjmdb=> execute direct on(dn_6003_6004) 'select * from part01 partition (p1)';
     id | name
    ----+-------
      1 | test1
      2 | test2
    (2 rows)
    
    sjmdb=> execute direct on(dn_6003_6004) 'select * from part01 partition (p2)';
     id | name
    ----+-------
      4 | test4
    (1 row)
    
    sjmdb=> execute direct on(dn_6003_6004) 'select * from part01 partition (p3)';
     id | name
    ----+-------
      5 | test5
    (1 row)
    
    sjmdb=> execute direct on(dn_6003_6004) 'select * from part01 partition (p4)';
     id | name
    ----+------
    (0 rows)
    
    sjmdb=> execute direct on(dn_6005_6006) 'select * from part01 partition (p1)';
     id | name
    ----+------
    (0 rows)
    
    sjmdb=> execute direct on(dn_6005_6006) 'select * from part01 partition (p2)';
     id | name
    ----+------
    (0 rows)
    
    sjmdb=> execute direct on(dn_6005_6006) 'select * from part01 partition (p3)';
     id | name
    ----+-------
      6 | test6
    (1 row)
    
    sjmdb=> execute direct on(dn_6005_6006) 'select * from part01 partition (p4)';
     id | name
    ----+------
    (0 rows)
    
    Conclusion:
    According to its distribution key, this hash-distributed table is split into 3 shards across the 3 DNs, with one shard stored on each DN, and within each shard the data is further divided into the 4 range partitions. When a row is inserted, it is first routed to the DN that owns its shard by hashing the distribution key, and then placed into the specific partition determined by the partition key and its range. For example, the row (3,'test3') hashes to the shard on dn_6001_6002 and, because 3 falls in the range [3,5), lands in partition p2 there, which matches the output above.
  6. Query the table's partition data
    sjmdb=> select * from part01 partition (p1);
     id | name
    ----+-------
      1 | test1
      2 | test2
    (2 rows)
    
    sjmdb=> select * from part01 partition (p2);
     id | name
    ----+-------
      3 | test3
      4 | test4
    (2 rows)
    
    sjmdb=> select * from part01 partition (p3);
     id | name
    ----+-------
      6 | test6
      5 | test5
    (2 rows)
    
    sjmdb=> select * from part01 partition (p4);
     id | name
    ----+------
    (0 rows)
    
    sjmdb=>
    
    View the execution plan of one of these statements:
    sjmdb=> explain verbose select * from part01 partition (p1);
                                                 QUERY PLAN
    -------------------------------------------------------------------------------------------------------------
      id |                    operation                    | E-rows | E-distinct | E-memory | E-width | E-costs
     ----+-------------------------------------------------+--------+------------+----------+---------+---------
       1 | ->  Streaming (type: GATHER)                    |      5 |            |          |     122 | 8.30
       2 |    ->  Partition Iterator                       |      6 |            | 1MB      |     122 | 0.25
       3 |       ->  Partitioned Seq Scan on public.part01 |      6 |            | 1MB      |     122 | 0.25
    
     Predicate Information (identified by plan id)
     ----------------------------------------------
       2 --Partition Iterator
             Iterations: 1
       3 --Partitioned Seq Scan on public.part01
             Partitions Selected by Static Prune: 1
    
     Targetlist Information (identified by plan id)
     ----------------------------------------------
       1 --Streaming (type: GATHER)
             Output: id, name
             Node/s: All datanodes
       2 --Partition Iterator
             Output: id, name
       3 --Partitioned Seq Scan on public.part01
             Output: id, name
             Distribute Key: id
    
             ====== Query Summary =====
     ------------------------------------------
     System available mem: 8601600KB
     Query Max mem: 8601600KB
     Query estimated mem: 2048KB
     Initial DOP: 8
     Avail(CPU/IO)/Max core: (8.00/8.00)/8.00
     CPU/IO/Task util: 8.00/0.00/0
     Running/Active/Max statement: 0/0/21474836
     Final DOP: 1
    (34 rows)
    
    Conclusion:
    When querying a partition of this hash-distributed partitioned table, the CN dispatches the statement to the DNs holding the table's shards; each DN executes the query and returns its results to the CN, which aggregates them and returns the final result to the client.
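
    As a follow-up check, when the query filters on the distribution key itself, the CN can prune the statement to a single shard instead of sending it to all DNs. The statement below is a sketch for observing this on the same table (its output is not reproduced here); with an equality condition on id, the GATHER node in the plan is expected to report a single DN rather than "All datanodes".

    -- Filter on the distribution key: the plan should only need the DN
    -- that owns the shard containing id = 3.
    explain verbose select * from part01 where id = 3;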

IV. Test Conclusions

  1. Write path for the hash-distributed partitioned table: according to its distribution key, the table is split into 3 shards across the 3 DNs, with one shard per DN, and each shard is further divided into the 4 range partitions. An inserted row is first routed to the DN that owns its shard by hashing the distribution key, and is then placed into the specific partition determined by the partition key and its range.
  2. Read path for partition data of the hash-distributed partitioned table: when a partition is queried, the CN dispatches the statement to the DNs holding the table's shards; each DN executes the query and returns its results to the CN, which aggregates them and returns the final result to the client.