Feed aggregator

Debugging triggers: referencing :new and :old values

Tom Kyte - 4 hours 52 min ago
Hi, I'm wondering if there is a way to see, in debug mode, the values of the :new and :old trigger variables in Oracle. Thanks!
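For illustration, a minimal sketch of one way to see those values: since :new and :old are only visible inside the trigger body, the trigger itself can write them out, for example with DBMS_OUTPUT (the emp table and its columns here are hypothetical). <code>
-- Hypothetical table/column names (emp, empno, sal); set serveroutput on to see the output.
create or replace trigger emp_sal_debug
before update of sal on emp
for each row
begin
  dbms_output.put_line('emp ' || :old.empno ||
                       ': old sal = ' || :old.sal ||
                       ', new sal = ' || :new.sal);
end;
/
</code> An autonomous-transaction logging table works the same way when DBMS_OUTPUT is not practical.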
Categories: DBA Blogs

Alternative to Fast Updating Materialized Views?

Tom Kyte - 4 hours 52 min ago
Due to various external constraints, it is not possible to set up fast refreshes on the remote database for some materialized views. However, these tables have "last modified date" type fields (triggers update the column on insert/update). Most of these tables also do not get delete statements during normal use; for those that do have deleted data, there may be audit tables available that could be used to find those records. Is it possible to have the fast-refresh process tap into these fields instead of whatever mechanism is normally used? If we were to roll our own "fast refresh" process, how do we keep a table marked as a materialized view but allow it to be updated without having it attempt to push the changes back? Ideally, we could force a normal dbms_mview.refresh at any time (without resetting the in-place materialized view) to do a full refresh in the event that the custom fast-refresh process has gotten out of sync.
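For illustration only, a hedged sketch of what a home-grown incremental refresh driven by a last-modified-date column might look like; all table, column and database-link names are hypothetical, and the fallback uses dbms_mview.refresh with the complete-refresh method. <code>
-- Hypothetical names: local_copy, remote_table, remote_db, last_mod_dt.
merge into local_copy l
using (select r.id, r.col1, r.last_mod_dt
       from   remote_table@remote_db r
       where  r.last_mod_dt > (select nvl(max(last_mod_dt), date '1900-01-01')
                               from   local_copy)) src
on    (l.id = src.id)
when matched then
  update set l.col1 = src.col1, l.last_mod_dt = src.last_mod_dt
when not matched then
  insert (id, col1, last_mod_dt) values (src.id, src.col1, src.last_mod_dt);

-- Fallback if the custom process gets out of sync: force a complete refresh.
-- exec dbms_mview.refresh('MY_MVIEW', method => 'C');
</code>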
Categories: DBA Blogs

FlashBack Version Query Filtering

Tom Kyte - 4 hours 52 min ago
Hello everyone, I was excited about the Flashback Version Query feature and have a question. I want to track changes on some tables and it seemed that Flashback Version Query had this feature. For example, I could store the last processed SCN (select t.CURRENT_SCN from v$database t) and run a select like this: <code> select versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_xid, versions_operation, contract_id from noffer.contract_status_change versions BETWEEN SCN 12477243880505 and 12477244180201 </code> We have archive logging at a 3-hour interval, but I need to track changes every second, maybe 4-5 times per second, and the query should filter changes between the two SCNs. However, it seems that it is not filtering but giving all rows. Then I added a where clause like this: <code> select versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_xid, versions_operation, contract_id from noffer.contract_status_change versions BETWEEN SCN 12477243880505 and 12477244180201 where versions_startscn is not null or versions_endscn is not null </code> and received the results I needed, but it does a full scan and is very slow. Are there any tricks to do this more efficiently? I just need to know which rows (for this example, which contract_id values were inserted, updated or deleted) changed in the given period (or, for this example, SCN range). For some of our tables there are 30-40 M rows per day. Or do I need to write triggers for tracking data changes? Thank you very much. Best, Dimitri Gamkrelidze
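As an illustration rather than a tested answer, if only a known set of keys needs checking then an indexed predicate can be added to the versions query so it does not have to visit every row; the bind names below are placeholders. <code>
select versions_startscn, versions_endscn, versions_operation, contract_id
from   noffer.contract_status_change
       versions between scn :last_processed_scn and :current_scn
where  contract_id = :contract_id
and    (versions_startscn is not null or versions_endscn is not null);
</code>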
Categories: DBA Blogs

Last bank used to pay a supplier

Tom Kyte - 4 hours 52 min ago
Hello, I am uncomfortable with the query below because it takes about 42 seconds to run and brings back about 40,000 records. <code>select ss.sup_in , nvl(lb.lastbank,0) from suppliers ss left join ( select bs.sup_tab, bs.sup_pad, bs.sup_in, first_value(bk.bank_in) over (partition by bs.sup_in order by ps.pay_day desc) as lastbank from bills bs inner join bills_payments bp on bs.bill_tab = bp.bill_tab and bs.bill_pad = bp.bill_pad and bs.bill_in = bp.bill_in and bs.org_tab = bp.org_tab and bs.org_pad = bp.org_pad and bs.org_in = bp.org_in inner join payments ps on bp.pay_tab = ps.pay_tab and bp.pay_pad = ps.pay_pad and bp.pay_in = ps.pay_in and bp.org_tab = ps.org_tab and bp.org_pad = ps.org_pad and bp.org_in = ps.org_in inner join banks bk on ps.bank_tab = bk.bank_tab and ps.bank_pad = bk.bank_pad and ps.bank_in = bk.bank_in ) lb on ss.sup_tab = lb.sup_tab and ss.sup_pad = lb.sup_pad and ss.sup_in = lb.sup_in group by ss.sup_in , nvl(lb.lastbank,0)</code> Is there another way to do this? I want to get all suppliers with the last bank used to pay their bills; if there is no payment, show 0. I noticed that the subquery below lists all payments for each supplier and, for me, that's not necessary: <code> select bs.sup_tab, bs.sup_pad, bs.sup_in, first_value(bk.bank_in) over (partition by bs.sup_in order by ps.pay_day desc) as lastbank from bills bs inner join bills_payments bp on bs.bill_tab = bp.bill_tab and bs.bill_pad = bp.bill_pad and bs.bill_in = bp.bill_in and bs.org_tab = bp.org_tab and bs.org_pad = bp.org_pad and bs.org_in = bp.org_in inner join payments ps on bp.pay_tab = ps.pay_tab and bp.pay_pad = ps.pay_pad and bp.pay_in = ps.pay_in and bp.org_tab = ps.org_tab and bp.org_pad = ps.org_pad and bp.org_in = ps.org_in inner join banks bk on ps.bank_tab = bk.bank_tab and ps.bank_pad = bk.bank_pad and ps.bank_in = bk.bank_in</code> The commands to create the tables and records are available on the Live SQL link. Best Regards, Ciro Stahlschmidt
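For illustration, a hedged rewrite (untested against the real data) that collapses the subquery to one row per supplier with MAX(...) KEEP (DENSE_RANK LAST ...), so the outer GROUP BY is no longer needed. <code>
select ss.sup_in,
       nvl(lb.lastbank, 0) as lastbank
from   suppliers ss
left join (
         select bs.sup_tab, bs.sup_pad, bs.sup_in,
                max(bk.bank_in) keep (dense_rank last order by ps.pay_day) as lastbank
         from   bills bs
         inner join bills_payments bp
                 on bs.bill_tab = bp.bill_tab and bs.bill_pad = bp.bill_pad and bs.bill_in = bp.bill_in
                and bs.org_tab  = bp.org_tab  and bs.org_pad  = bp.org_pad  and bs.org_in  = bp.org_in
         inner join payments ps
                 on bp.pay_tab = ps.pay_tab and bp.pay_pad = ps.pay_pad and bp.pay_in = ps.pay_in
                and bp.org_tab = ps.org_tab and bp.org_pad = ps.org_pad and bp.org_in = ps.org_in
         inner join banks bk
                 on ps.bank_tab = bk.bank_tab and ps.bank_pad = bk.bank_pad and ps.bank_in = bk.bank_in
         group by bs.sup_tab, bs.sup_pad, bs.sup_in
       ) lb
on     ss.sup_tab = lb.sup_tab
and    ss.sup_pad = lb.sup_pad
and    ss.sup_in  = lb.sup_in;
</code>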
Categories: DBA Blogs

Data skew and join cardinality estimates

Tom Kyte - 4 hours 52 min ago
Hello Chris/Connor, I am trying to troubleshoot a performance issue with a specific sql and one of the things I am struggling with is the estimate of join cardinality in the presence of data skew. I have to admit that I have had limited success in coming up with a reproducible test case, but it would be great if you could bear with me and help me with my question. Below is the test case setup <code>drop table t_parent purge ; drop table t_child purge ; create table t_parent (tc_id, tp_id, tc_padding) as select level, mod(level, 357), dbms_random.string('a',100) from dual connect by level <= 400 ; delete from t_parent where tp_id = 0; commit ; alter table t_parent add constraint t_parent_pk primary key (tc_id) ; CREATE TABLE t_child AS SELECT ROWNUM + 100 tch_id, tc_id, dbms_random.string('a',100) tch_padding FROM t_parent,TABLE ( cast( MULTISET ( SELECT level FROM dual CONNECT BY level <= decode(mod(t_parent.tc_id, 357), 3, exp(10), t_parent.tc_id) ) as sys.odcinumberlist ) ); alter table t_child add constraint t_child_pk primary key (tch_id) ; </code> The TP_ID and TC_ID columns in the T_PARENT table have a parent-child relationship. T_CHILD references the T_PARENT table (although an explicit FK is not defined) and I have tried to create data skew in the T_CHILD table for the TC_ID column, where the column has more than 254 distinct values (causing a HEIGHT BALANCED histogram to be created on it, in order to make this similar to the tables in the original sql). <code>select tc_id, count(*) from t_child where tc_id in (3,7) group by tc_id ; TC_ID COUNT(*) 7 7 3 22026 exec dbms_stats.gather_table_stats(user,'T_PARENT') ; exec dbms_stats.gather_table_stats(user,'T_CHILD') ; </code> The simple sql below produces close-enough estimates. <code> select /*+ GATHER_PLAN_STATISTICS */ count(*) from t_child where tc_id = 3 ; COUNT(*) 22026 SQL_ID fc7qd7fgvkp0a, child number 0 ------------------------------------- select /*+ GATHER_PLAN_STATISTICS */ count(*) from t_child where tc_id = :"SYS_B_0" Plan hash value: 1228457128 ----------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | ----------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 1884 (100)| 1 |00:00:00.05 | 1988 | | 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.05 | 1988 | |* 2 | TABLE ACCESS FULL| T_CHILD | 1 | 22372 | 1884 (1)| 22026 |00:00:00.04 | 1988 | ----------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter("TC_ID"=:SYS_B_0) </code> However, when I join it, the estimates are far from close enough (161 vs 7 or 161 vs 22026). In the original sql, these 2 tables are being joined to more tables and this join is acting as the driving rowsource. The original sql always uses a NESTED LOOP join when joining this rowsource (the join of T_PARENT and T_CHILD) with the subsequent table. When this rowsource produces a small number of rows (up to 50), the sql performs well, but for specific extreme values it produces a large number of rows (more than 2000) and performance suffers. The original sql estimates T_CHILD will produce 3 rows vs. an actual 2000 for specific cases. Only forcing dynamic sampling at level 11 provides better estimates for extreme cases. Is there a better way to influence the optimizer to come up with better estimates for such skewed data?
<code>select /*+ ...
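Two hedged things to experiment with on this test case (not a definitive fix): regathering T_CHILD statistics with a larger histogram bucket count, since 12c allows up to 2048 buckets and TC_ID has only around 357 distinct values, so a frequency-style histogram becomes possible instead of a HEIGHT BALANCED one, and statement-level dynamic sampling, which the question already found helpful at level 11. <code>
-- histogram with more buckets than TC_ID has distinct values (12c allows up to 2048)
exec dbms_stats.gather_table_stats(user, 'T_CHILD', method_opt => 'for all columns size 2048')

-- statement-level dynamic sampling on the joined query (level 11 = automatic)
select /*+ gather_plan_statistics dynamic_sampling(11) */ count(*)
from   t_parent p
join   t_child  c on c.tc_id = p.tc_id
where  c.tc_id = 3;
</code>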
Categories: DBA Blogs

External Tables - Skipping Columns

Tom Kyte - 4 hours 52 min ago
Hi, I have a problem using external tables. When a field is not populated, it is being ignored and the following values shift into the previous columns. An example of the sort of SQL I am using is <code>CREATE TABLE "EXT_TABLE_TEST" ("COLUMN1" VARCHAR2(100 BYTE), "COLUMN2" VARCHAR2(100 BYTE), "COLUMN3" VARCHAR2(100 BYTE), "COLUMN4" VARCHAR2(100 BYTE) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "DIR" ACCESS PARAMETERS ( records delimited by newline BADFILE 'Test.bad' LOGFILE 'Test.log' fields terminated by 0x'09' OPTIONALLY ENCLOSED BY '"' MISSING FIELD VALUES ARE NULL ) LOCATION ( "DIR":'Test.txt' ) ) REJECT LIMIT UNLIMITED;</code> If, for example, I have a file containing <code>a b c d a c d a b d</code> at the moment the data is displaying in the query as <code>a b c d a c d a b d </code> As you can see, the data in the 2nd and 3rd rows is not showing in the correct columns. If I change the source file to comma-separated it seems to enforce the correct column ordering, but tab-delimited does not. Any ideas whether there is a fix for this? Cheers Ian
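One hedged thing to try (not a confirmed fix) is to spell out the field list in the access parameters so that each tab-delimited position is mapped to a column explicitly. <code>
ACCESS PARAMETERS (
  records delimited by newline
  BADFILE 'Test.bad' LOGFILE 'Test.log'
  fields terminated by 0x'09' OPTIONALLY ENCLOSED BY '"'
  MISSING FIELD VALUES ARE NULL
  ( "COLUMN1" CHAR(100),
    "COLUMN2" CHAR(100),
    "COLUMN3" CHAR(100),
    "COLUMN4" CHAR(100) )
)
</code>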
Categories: DBA Blogs

Spring Boot Data Elasticsearch using Elastic Cloud on Kubernetes (ECK) on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Pas Apicella - Mon, 2020-07-13 22:50
VMware Tanzu Kubernetes Grid Integrated Edition (formerly known as VMware Enterprise PKS) is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management.

In this post I show how to get Elastic Cloud on Kubernetes (ECK) up and running on VMware Tanzu Kubernetes Grid Integrated Edition and how to access it from a Spring Boot application using Spring Data Elasticsearch.

With ECK, users now have a seamless way of deploying, managing, and operating the Elastic Stack on Kubernetes.

If you have a K8s cluster, that's all you need to follow along.

Steps

1. Let's install ECK on our cluster. We do that as follows:

Note: 1.1 is the latest version, but I am installing a slightly older one here.

$ kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml

2. Make sure the operator is up and running as shown below
  
$ kubectl get all -n elastic-system
NAME READY STATUS RESTARTS AGE
pod/elastic-operator-0 1/1 Running 0 26d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elastic-webhook-server ClusterIP 10.100.200.55 <none> 443/TCP 26d

NAME READY AGE
statefulset.apps/elastic-operator 1/1 26d

3. We can also see a CRD for Elasticsearch as shown below.

elasticsearches.elasticsearch.k8s.elastic.co
  
$ kubectl get crd
NAME CREATED AT
apmservers.apm.k8s.elastic.co 2020-06-17T00:37:32Z
clusterlogsinks.pksapi.io 2020-06-16T23:04:43Z
clustermetricsinks.pksapi.io 2020-06-16T23:04:44Z
elasticsearches.elasticsearch.k8s.elastic.co 2020-06-17T00:37:33Z
kibanas.kibana.k8s.elastic.co 2020-06-17T00:37:34Z
loadbalancers.vmware.com 2020-06-16T22:51:52Z
logsinks.pksapi.io 2020-06-16T23:04:43Z
metricsinks.pksapi.io 2020-06-16T23:04:44Z
nsxerrors.nsx.vmware.com 2020-06-16T22:51:52Z
nsxlbmonitors.vmware.com 2020-06-16T22:51:52Z
nsxlocks.nsx.vmware.com 2020-06-16T22:51:51Z

4. We are now ready to create our first Elasticsearch cluster. To do that, create a YML file as shown below

create-elastic-cluster-from-operator.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false

From the YML a few things to note:

  • We are creating two pods for our Elasticsearch cluster
  • We are using a K8s LoadBalancer to expose access to the cluster through HTTP
  • We are using version 7.7.0 but this is not the latest Elasticsearch version
  • We have disabled the use of TLS given this is just a demo
5. Apply that as shown below.

$ kubectl apply -f create-elastic-cluster-from-operator.yaml

6. After about a minute we should have our Elasticsearch cluster running. The following commands show that
  
$ kubectl get elasticsearch
NAME HEALTH NODES VERSION PHASE AGE
quickstart green 2 7.7.0 Ready 47h

$ kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/quickstart-es-default-0 1/1 Running 0 47h
pod/quickstart-es-default-1 1/1 Running 0 47h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 47h
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 47h

NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 47h

7. Let's deploy a Kibana instance. To do that, create a YML file as shown below

create-kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: quickstart
    namespace: default
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP

8. Apply that as shown below.

$ kubectl apply -f create-kibana.yaml

9. To verify everything is up and running we can run a command as follows
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/kibana-sample-kb-f8fcb88d5-jdzh5 1/1 Running 0 2d
pod/quickstart-es-default-0 1/1 Running 0 2d
pod/quickstart-es-default-1 1/1 Running 0 2d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kibana-sample-kb-http LoadBalancer 10.100.200.46 10.195.93.174 5601:32459/TCP 2d
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 2d
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 2d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kibana-sample-kb 1/1 1 1 2d

NAME DESIRED CURRENT READY AGE
replicaset.apps/kibana-sample-kb-f8fcb88d5 1 1 1 2d

NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 2d

10. To access our cluster we will need to obtain the following, which we can do using a script as shown below. This was tested on Mac OSX

What do we need?

  • Elasticsearch password
  • IP address of the LoadBalancer service we created


access.sh

export PASSWORD=`kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'`
export IP=`kubectl get svc quickstart-es-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo $IP
echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200"

echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200/_cat/health?v"

Output:

10.195.93.137

{
  "name" : "quickstart-es-default-1",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "Bbpb7Pu7SmaQaCmEY2Er8g",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

.....

11. Ideally I would load some data into the Elasticsearch cluster BUT let's do that as part of a sample application using "Spring Data Elasticsearch". Clone the demo project as shown below.

$ git clone https://github.com/papicella/boot-elastic-demo.git
Cloning into 'boot-elastic-demo'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 36 (delta 1), reused 36 (delta 1), pack-reused 0
Unpacking objects: 100% (36/36), done.

12. Edit "./src/main/resources/application.yml" with your details for the Elasticsearch cluster above.

spring:
  elasticsearch:
    rest:
      username: elastic
      password: {PASSWORD}
      uris: http://{IP}:9200

13. Package as follows

$ ./mvnw -DskipTests package

14. Run as follows

$ ./mvnw spring-boot:run

....
2020-07-14 11:10:11.947  INFO 76260 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-07-14 11:10:11.954  INFO 76260 --- [           main] c.e.e.demo.BootElasticDemoApplication    : Started BootElasticDemoApplication in 2.495 seconds (JVM running for 2.778)
....

15. Access application using "http://localhost:8080/"




16. If we look at our code we will see the data was loaded into the Elasticsearch cluster using a Java class called "LoadData.java". Ideally the data would already exist in the cluster, but for demo purposes we load some data as part of the Spring Boot application and clear it prior to each application run, given it's just a demo.

2020-07-14 11:12:33.109  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OjThSnMBLjyTRl7lZsDL', make='holden', model='commodore', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:33.584  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OzThSnMBLjyTRl7laMCo', make='holden', model='astra', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.189  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PDThSnMBLjyTRl7lasCC', make='nissan', model='skyline', bodystyles=[BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.744  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PTThSnMBLjyTRl7lbMDe', make='nissan', model='pathfinder', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:35.227  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PjThSnMBLjyTRl7lb8AL', make='ford', model='falcon', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:36.737  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QDThSnMBLjyTRl7lcMDu', make='ford', model='territory', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.266  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QTThSnMBLjyTRl7ldsDU', make='toyota', model='camry', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.777  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QjThSnMBLjyTRl7leMDk', make='toyota', model='corolla', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.285  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QzThSnMBLjyTRl7lesDj', make='kia', model='sorento', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.800  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='RDThSnMBLjyTRl7lfMDg', make='kia', model='sportage', bodystyles=[BodyStyle{type='4-door'}]}

LoadData.java
  
package com.example.elastic.demo;

import com.example.elastic.demo.indices.BodyStyle;
import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;

import static java.util.Arrays.asList;

@Configuration
@Slf4j
public class LoadData {

    @Bean
    public CommandLineRunner initElasticsearchData(CarRepository carRepository) {
        return args -> {
            carRepository.deleteAll();
            log.info("Pre loading " + carRepository.save(new Car("holden", "commodore", asList(new BodyStyle("2-door"), new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("holden", "astra", asList(new BodyStyle("2-door"), new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "skyline", asList(new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "pathfinder", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "falcon", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "territory", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "camry", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "corolla", asList(new BodyStyle("2-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sorento", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sportage", asList(new BodyStyle("4-door")))));
        };
    }
}

17. Our CarRepository interface is defined as follows

CarRepository.java
  
package com.example.elastic.demo.repo;

import com.example.elastic.demo.indices.Car;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface CarRepository extends ElasticsearchRepository<Car, String> {

    Page<Car> findByMakeContaining(String make, Pageable page);

}

18. So let's also view this data using "curl" and Kibana as shown below.

curl -X GET -u "elastic:{PASSWORD}" "http://{IP}:9200/vehicle/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "sort": [
    { "_id": "asc" }
  ]
}
'

Output:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OjThSnMBLjyTRl7lZsDL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "commodore",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "OjThSnMBLjyTRl7lZsDL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OzThSnMBLjyTRl7laMCo",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "astra",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "OzThSnMBLjyTRl7laMCo"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PDThSnMBLjyTRl7lasCC",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "skyline",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "PDThSnMBLjyTRl7lasCC"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PTThSnMBLjyTRl7lbMDe",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "pathfinder",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PTThSnMBLjyTRl7lbMDe"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PjThSnMBLjyTRl7lb8AL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "falcon",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PjThSnMBLjyTRl7lb8AL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QDThSnMBLjyTRl7lcMDu",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "territory",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QDThSnMBLjyTRl7lcMDu"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QTThSnMBLjyTRl7ldsDU",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "camry",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QTThSnMBLjyTRl7ldsDU"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QjThSnMBLjyTRl7leMDk",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "corolla",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QjThSnMBLjyTRl7leMDk"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QzThSnMBLjyTRl7lesDj",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sorento",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QzThSnMBLjyTRl7lesDj"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "RDThSnMBLjyTRl7lfMDg",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sportage",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "RDThSnMBLjyTRl7lfMDg"
        ]
      }
    ]
  }
}

Kibana

Obtain the Kibana HTTP IP as shown below and log in using the username "elastic" and the password we obtained previously.

$ kubectl get svc kibana-sample-kb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.195.93.174




Finally, maybe you want to deploy the application to Kubernetes. To do that, take a look at the Cloud Native Buildpacks CNCF project and/or Tanzu Build Service to turn your code into a container image stored in a registry.



More Information

Spring Data Elasticsearch
https://spring.io/projects/spring-data-elasticsearch

VMware Tanzu Kubernetes Grid Integrated Edition Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid-Integrated-Edition/index.html
Categories: Fusion Middleware

Impact on importing oracle predefined java libraries

Tom Kyte - Mon, 2020-07-13 18:26
Hi Tom, I am trying to import the predefined Java libraries below to identify the language of a given text. Is there any impact on the database from importing these libraries, such as slowness, or any impact on applications connecting to the database? loadjava -u DB_USER orai18n.jar orai18n-collation.jar orai18n-lcsd.jar orai18n-mapping.jar orai18n-net.jar orai18n-servlet.jar orai18n-tools.jar orai18n-translation.jar orai18n-utility.jar
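One hedged check after running loadjava is to confirm the loaded classes resolved and are valid in the target schema: <code>
select object_type, status, count(*)
from   user_objects
where  object_type like 'JAVA%'
group  by object_type, status
order  by object_type, status;
</code>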
Categories: DBA Blogs

DB upgrade guidelines/ practices in enterprise applications

Tom Kyte - Mon, 2020-07-13 18:26
Dear Team, As in any enterprise application, we have many in-house developed Microsoft .NET applications that connect to a central DB, Oracle 10G in this case. We have many web services based on REST/SOAP, microservices, Oracle Reports, small tools and utilities, batch jobs, web applications (IIS hosted) and one big Windows-based application in this environment. We plan to upgrade Oracle to 18C in such an environment. The applications need to be upgraded with Oracle client code so that they can connect to the newer version of Oracle. We are not planning any changes in schema for the upgrade. Our plan is to upgrade all the dependent applications in a lower environment: update the Oracle client code to handle the new version. Once all the applications and Oracle Reports are migrated in the development environment, we will start moving to higher environments such as UAT, INT, PERF, Pre-Prod and Prod. In non-prod environments (other than Prod and Pre-Prod), we can directly switch from one version of Oracle to another because we have multiple instances such as UAT1, UAT2, INT1, INT2, etc. However, doing so in Pre-Prod and Prod is neither possible nor advisable. We may need to do a parallel run before we switch off the old DB. In this regard, can you suggest good practices that we should follow for upgrading the applications in the lower environments, and also what strategy is better suited for the prod environment? Thanks. Regards, Sumit
Categories: DBA Blogs

Sultanate of Oman Launches Major National Project to Strengthen Digital Economy

Oracle Press Releases - Mon, 2020-07-13 12:56
Press Release
Sultanate of Oman Launches Major National Project to Strengthen Digital Economy
Selects Oracle Dedicated Region Cloud@Customer to Realize Country’s Vision for 2040

Redwood Shores, Calif.—Jul 13, 2020

Today Oracle announced that the Government of the Sultanate of Oman has selected Oracle Dedicated Region Cloud@Customer to support the Government’s national initiative to accelerate digital transformation across more than 120 government and semi-government entities with an integrated Government Cloud (G – Cloud) platform. The initiative is led by the Oman Information and Communications Technology Group (OICTG), a new entity established by the Oman Investment Authority (OIA), owned by the Government of the Sultanate of Oman. Oracle Dedicated Region Cloud@Customer provides the Government with a fully managed cloud region that brings all of Oracle’s second-generation cloud services, including Autonomous Database, to the government’s datacenters.

“At OICTG, our aim is to design digital initiatives to strengthen the country’s digital economy in line with Oman’s Vision 2040 framework,” said His Highness Al Sayyid Kamil bin Fahad Al Said, Chairman, OICTG. “We are working toward creating an integrated cloud-ready modern IT infrastructure that provides seamless scalability, agility, and security with a compelling cost of ownership.”

“Cooperation with Oracle demonstrates OICTG’s focus on innovation and 4IR trends: migrating on-premises Oracle ERP and CRM applications to a cloud environment, developing customised cloud native applications, and exploring the implementation of emerging technologies like Artificial Intelligence, Blockchain, Big Data and high-performance computing (HPC), while following Oman’s data sovereignty regulations,” said Al Mandhari, Chief Executive Officer, OICTG. “Oracle Dedicated Region Cloud@Customer in Oman will help us explore several dynamic use cases for Omani Government entities.”

Oracle Dedicated Region Cloud@Customer enables customers, like the government of Oman, to run an entire IT portfolio on a cloud infrastructure with physical control of infrastructure and data, to help meet the most demanding data sovereignty requirements. The fully managed cloud region provides strong isolation of customer data, including all API operations, which remain local to the government’s datacenters and provide the highest levels of security. Oracle Dedicated Region Cloud@Customer provides significant cost benefits to more than 120 Omani Government entities under the Oman G – Cloud initiative. Omani Government entities will be able to drive agility by quickly bringing new applications and services online in an efficient and cost-effective manner compared to an on-premises platform.

“Making cloud innovation more accessible to governments and enterprises with superior security, infrastructure, performance, reliability, and data management is a key priority for Oracle,” said Abdul Rahman Al Thehaiban, Senior Vice President, Technology, MEA & CEE, Oracle. “The Sultanate of Oman’s ‘Cloud First’ national strategy to drive socio economic success for the country is inspiring. Government entities in Oman will now be able to quickly respond to changing economic conditions by rapidly scaling and introducing secure innovative services.”

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
Katie Barron
Oracle PR
+1.202.904.1138
katie.barron@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Katie Barron

  • +1.202.904.1138

Min/Max costing

Jonathan Lewis - Mon, 2020-07-13 07:07

A question about the min/max index scan appeared on the Oracle Developer Community forum recently. The query supplied in the thread was a little odd – you might ask why anyone would run it as it stands – and I’ve modified it to make it even stranger to demonstrate a range of details.

I’ll start with a simple data set, not bothering to collect stats because that will be done automatically on create for my versions:

rem
rem     Script:         min_max_cost_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 
as 
select  * 
from    all_objects 
where   rownum <= 50000 -- > comment to avoid wordpress format issue
;

create index t1_i1 on t1(object_name);

Now a few simple queries – for which I’ll capture and display the in-memory execution plans a little further on:


set linesize 156
set pagesize 60
set trimspool on
set serveroutput off
alter session set statistics_level = all;

prompt  =====================
prompt  Baseline select max()
prompt  =====================

select max(object_name) from t1;
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  ============================
prompt  select max() with dummy join
prompt  ============================

select max(object_name) from t1, dual where dummy is not null;
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  =============================================
prompt  select max() with dummy join and index() hint
prompt  =============================================

select /*+ index(t1) */  max(object_name) from t1, dual where dummy is not null;
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  ============================================
prompt  select max() with dummy join and inline view
prompt  ============================================

select  obj
from    (
        select  max(object_name)  obj
        from    t1
        ),
        dual 
where   dummy is not null
/

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last alias'));

prompt  ====================================
prompt  select max() with existence subquery
prompt  ====================================

select max(object_name) from t1 where exists (select null from dual where dummy is not null);
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last alias'));

prompt  ============================================
prompt  select max() with failing existence subquery
prompt  ============================================

select max(object_name) from t1 where exists (select null from dual where dummy is null);
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last alias'));

With 50,000 rows and the appropriate index to allow Oracle to find the maximum value very quickly we expect the optimizer to invoke the “index full scan (min/max)” operation, visiting only the extreme leaf block of the index – and, indeed, we are not disappointed, that’s exactly what the baseline query shows us:

=====================
Baseline select max()
=====================
-----------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       3 |
|   1 |  SORT AGGREGATE            |       |      1 |      1 |            |      1 |00:00:00.01 |       3 |
|   2 |   INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
-----------------------------------------------------------------------------------------------------------

However, when we introduce the (as yet unjustified) join to dual something very different happens – the optimizer forgets all about the min/max optimisation and does an index fast full scan of the t1_i1 index, passing all 50,000 rows up to the parent operation.


============================
select max() with dummy join
============================
-------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |      1 |        |    50 (100)|      1 |00:00:00.02 |     360 |
|   1 |  SORT AGGREGATE        |       |      1 |      1 |            |      1 |00:00:00.02 |     360 |
|   2 |   NESTED LOOPS         |       |      1 |  50000 |    50   (6)|  50000 |00:00:00.01 |     360 |
|*  3 |    TABLE ACCESS FULL   | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|   4 |    INDEX FAST FULL SCAN| T1_I1 |      1 |  50000 |    48   (7)|  50000 |00:00:00.01 |     357 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("DUMMY" IS NOT NULL)

We could, of course, try hinting an index range (full) scan to see what happens – and the result is even more surprising: Oracle takes the hint, uses the min/max optimisation, and shows us that it didn’t take that path by default because it had “forgotten” how to cost it correctly.

Note the cost of 354 at operation 5 when the original min/max cost was 3, note also that the optimizer thinks we have to visit all 50,000 index entries even though, at run-time, Oracle correctly uses a path that visits only one index entry:


=============================================
select max() with dummy join and index() hint
=============================================
-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |   356 (100)|      1 |00:00:00.01 |       6 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |            |      1 |00:00:00.01 |       6 |
|   2 |   NESTED LOOPS               |       |      1 |  50000 |   356   (2)|      1 |00:00:00.01 |       6 |
|*  3 |    TABLE ACCESS FULL         | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|   4 |    FIRST ROW                 |       |      1 |  50000 |   354   (2)|      1 |00:00:00.01 |       3 |
|   5 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |  50000 |   354   (2)|      1 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("DUMMY" IS NOT NULL)

Of course we could recognise that the t1 access and the access to dual could be de-coupled – and hope that the optimizer doesn’t try to use complex view merging (maybe we should have included a /*+ no_merge */ hint) to fall back to a simple join. Fortunately the optimizer doesn’t try merging the two query blocks, so it optimises the max(object_name) query block correctly, giving us the benefit of the min/max optimisation. I’ve included the ‘alias’ format option in this call to dbms_xplan() so that we can see the two query blocks that are optimised separately.


============================================
select max() with dummy join and inline view
============================================

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     5 (100)|      1 |00:00:00.01 |       6 |
|   1 |  NESTED LOOPS                |       |      1 |      1 |     5   (0)|      1 |00:00:00.01 |       6 |
|*  2 |   TABLE ACCESS FULL          | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|   3 |   VIEW                       |       |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
|   4 |    SORT AGGREGATE            |       |      1 |      1 |            |      1 |00:00:00.01 |       3 |
|   5 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$1 / DUAL@SEL$1
   3 - SEL$2 / from$_subquery$_001@SEL$1
   4 - SEL$2
   5 - SEL$2 / T1@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DUMMY" IS NOT NULL)

There is a maxim (or guideline, or rule of thumb) that if the from clause of a query includes tables that don’t get referenced in the select list then those tables should (probably) appear in subqueries. Of course this guideline sometimes turns out to be a very bad idea, and sometimes it just means the optimizer unnests the subqueries and recreates the joins we started with, but let’s try the approach with this query. I’ve included the ‘alias’ option again so that you can see that this plan is optimised as two query blocks, allowing the max(object_name) query block to find the min/max strategy.


====================================
select max() with existence subquery
====================================
-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     5 (100)|      1 |00:00:00.01 |       6 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |            |      1 |00:00:00.01 |       6 |
|*  2 |   FILTER                     |       |      1 |        |            |      1 |00:00:00.01 |       6 |
|   3 |    FIRST ROW                 |       |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
|   4 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
|*  5 |    TABLE ACCESS FULL         | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 | 
-------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   4 - SEL$1 / T1@SEL$1
   5 - SEL$2 / DUAL@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   5 - filter("DUMMY" IS NOT NULL)

There’s a very important detail in the execution plan above. At first sight it looks like the optimizer has a plan using a simple filter subquery operation – which means you might be fooled into reading it as “for each row returned by operation 3 call operation 5”. This is not the case.

Because the subquery is not a correlated subquery – it’s an example that I sometimes call a “fixed” or (slightly ambiguously) “constant” subquery – Oracle can execute it once and use the resulting rowsource to decide whether or not to call the main query. It’s a case where (if you didn’t realise the plan consisted of two separate query blocks) you would say that Oracle was calling the second child first.

To prove this point I’ve set up one last variation of the query – the “failed subquery” version – where my select from dual returns no rows. Check the numbers of Starts shown for each line of the plan:


============================================
select max() with failing existence subquery
============================================
-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     5 (100)|      1 |00:00:00.01 |       3 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |            |      1 |00:00:00.01 |       3 |
|*  2 |   FILTER                     |       |      1 |        |            |      0 |00:00:00.01 |       3 |
|   3 |    FIRST ROW                 |       |      0 |      1 |     3   (0)|      0 |00:00:00.01 |       0 |
|   4 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      0 |      1 |     3   (0)|      0 |00:00:00.01 |       0 |
|*  5 |    TABLE ACCESS FULL         | DUAL  |      1 |      1 |     2   (0)|      0 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   4 - SEL$1 / T1@SEL$1
   5 - SEL$2 / DUAL@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   5 - filter("DUMMY" IS NULL)

The filter at operation 3 calls operation 5 – the query against dual – which runs once returning no rows. The min/max scan of t1_i1 at operation 4 doesn’t run. Operation 5 was called before operation 4 was considered.

Finally

This brings us back to the question – why would anyone run a strange query like this?

Perhaps the answer is that it’s just a demonstration of one part of a more complex query and what we’re trying to do is say: “if a certain record exists in a control table then include some information from table X”.

This note tells us that if there’s a possibility of a min/max optimisation to find the data then we should avoid using a join, instead we should use a “fixed subquery” to check the control table, and maybe we’ll also have to write the part of our query that collects (or isn’t required to collect) the interesting bit of data as an inline view.

 

RAC cluster_interconnect config

Tom Kyte - Mon, 2020-07-13 00:26
Hi Tom, Is it OK to mix Fiber and Ethernet interface types for the cluster_interconnect? Ex. ib0 192.168.3.0 global cluster_interconnect,asm ==> InfiniBand Fiber Interface; eno4 192.168.5.0 global cluster_interconnect,asm ==> Fast Ethernet Interface. Is there any potential risk with that kind of configuration? Thank you. Davis
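For reference, a hedged sketch of how to see which interfaces each instance is actually using for the interconnect, using only standard dictionary views: <code>
select inst_id, name, ip_address, is_public, source
from   gv$cluster_interconnects
order  by inst_id, name;
</code>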
Categories: DBA Blogs

CLOB

Tom Kyte - Mon, 2020-07-13 00:26
Hi Tom, I have a table in which one column is of CLOB datatype. The data in this column is 100 lines (a single record has 100 lines), and each line is separated by chr(13) (the Enter key's ASCII value). My requirement: if I search for a specific word and it matches the first record's 3rd line, the fifth record's 50th line and the 100th record's 1st line, then I need to show only those lines, not the entire records. That means my output will be 3 rows, but not the full records; instead, 3 rows containing just the 3 matching lines, i.e. the lines which have the keyword. How do I achieve this? Thank you
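A minimal sketch of one approach, assuming 12c or later (for LATERAL) and a hypothetical table t(id, txt): split each CLOB on chr(13) and keep only the matching lines. <code>
select t.id, l.line_no, l.line_text
from   t,
       lateral (
         select level as line_no,
                regexp_substr(t.txt, '[^' || chr(13) || ']+', 1, level) as line_text
         from   dual
         connect by level <= regexp_count(t.txt, chr(13)) + 1
       ) l
where  l.line_text like '%search_word%';
</code>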
Categories: DBA Blogs

Massive Deletes

Jonathan Lewis - Sun, 2020-07-12 13:36

One of the recurrent questions on the Oracle Developer Community forum is:

What’s the best way to delete millions of rows from a table?

There are an enormous number of relevant details that you need to know before you can give the “right” answer to this question, e.g.

  • Which version of Oracle
  • Standard or Enterprise Edition
  • Is “millions” a tiny percentage of the table or a large percentage
  • Are there any referential integrity constraints in place
  • Does the system have to keep running while the deletion takes place
  • Is the table compressed
  • How many indexes are there – and can you drop some of them
  • How much space do you have to do this job
  • How much time do you have to do this job

One of the most important ones, of course, is “Which version of Oracle?” because it can make an enormous difference to the range of possible strategies. I’m writing this particular note because the question came up a little while ago where the user wanted to delete all the data from 2008 through to the end of 2018, keeping only the last 18 months of data.

That sounds like the volume of data to be deleted (11 years) is very much larger than the volume of data to be kept (1.5 years) – but we can’t be sure of that since businesses tend to grow over time so that last 18 months of data might actually be just as big as the previous 11 years.

As usually happens in response to this question there were suggestions to “create a new table selecting the data you want to keep”, “use dbms_parallel_execute to delete by rowid ranges in parallel”, and a relatively new one “convert to a partitioned table so that the data you want to keep is in its own partition and drop the other partition(s)”. 

I wrote a note a few years ago giving an example of converting a simple heap table to a partitioned table – while eliminating the data you don’t want to keep so there’s no need to waste resources copying redundant data, maintaining indexes (choosing between local and global) and doing the whole job online.  So, after learning that the OP was running 12.2 Enterprise Edition with the Partitioning option, I suggested that (s)he convert the table into a hash partitioned table with a single partition as this should (for purposes of optimisation) behave just like a simple heap table using the “including rows” clause to copy only the last 18 months of data.

I pointed out that their version of Oracle (EE + PO) gave them the 2nd best option – because I knew that in 19c you could simply do something like:

rem
rem     Script:         122_move.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             12.2.0.1
rem

create table t1
as
select  *
from    all_objects
where   rownum <= 50000
;

alter table t1 add constraint t1_pk primary key(object_id);

alter table t1 move
        including rows where owner != 'SYS'
        online
;

It wasn’t until a little later that a tiny nagging doubt crept into my mind that maybe this enhancement to the basic move command may have appeared at the same time as the modify partition enhancement – in other words in 12.2.0.1; so I ran the test above and found that it did actually seem to work. (And I haven’t yet found any bugs on MOS suggesting that it shouldn’t be used.)

Having discovered that the command was available I thought that I’d better check whether it was also documented, and found that it was in the 12.2 SQL Reference Manual (though not the 12.1 reference manual – for the obvious reason) under Alter Table. Page down to the “tram-tracks” for the Alter Table command and follow the link for the “move_table_clause”, and from there follow the link for “filter_condition”.

Note:

This option is not available on 12.1, and if you run the test Oracle will raise error “ORA-25184: column name expected” at the point where the word “rows” appears. This may look somewhat counter-intuitive, but in that version of Oracle a command like “alter table TabX move including ColY online” is how you would rebuild an index organized table (IOT) with all columns up to ColY in the “IOT_TOP” segment.

 

A Function Based Index : 2

Hemant K Chitale - Sun, 2020-07-12 10:16
Here's another Function Based Index example.   This is in 19.3  (not to demonstrate any difference between  12.2 and 19.3 here)

SQL> create table employees
2 (employee_id number primary key,
3 first_name varchar2(30),
4 last_name varchar2(30))
5 pctfree 0
6 /

Table created.

SQL> insert into employees
...
...

10000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','EMPLOYEES');

PL/SQL procedure successfully completed.

SQL>
SQL> select sample_size, blocks, empty_blocks, avg_row_len
2 from user_tab_statistics
3 where table_name = 'EMPLOYEES'
4 /

SAMPLE_SIZE BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
----------- ---------- ------------ -----------
10000 65 0 45

SQL>
SQL> select 45*10000/8192 from dual;

45*10000/8192
-------------
54.9316406

SQL>
SQL> select column_name, avg_col_len
2 from user_tab_cols
3 where table_name = 'EMPLOYEES'
4 order by column_id
5 /

COLUMN_NAME AVG_COL_LEN
------------------------------ -----------
EMPLOYEE_ID 4
FIRST_NAME 19
LAST_NAME 22

SQL>
SQL> create index employees_full_name_ndx
2 on employees(first_name||last_name)
3 /

Index created.

SQL>
SQL> select column_name, avg_col_len
2 from user_tab_cols
3 where table_name = 'EMPLOYEES'
4 order by column_id
5 /

COLUMN_NAME AVG_COL_LEN
------------------------------ -----------
EMPLOYEE_ID 4
FIRST_NAME 19
LAST_NAME 22
SYS_NC00004$

SQL>
SQL> -- no statistics on the new virtual column yet
SQL> exec dbms_stats.gather_table_stats('','EMPLOYEES');

PL/SQL procedure successfully completed.

SQL> select column_name, avg_col_len
2 from user_tab_cols
3 where table_name = 'EMPLOYEES'
4 order by column_id
5 /

COLUMN_NAME                    AVG_COL_LEN
------------------------------ -----------
EMPLOYEE_ID                              4
FIRST_NAME                              19
LAST_NAME                               22
SYS_NC00004$                            40

SQL>
SQL> select num_rows, sample_size, blocks, empty_blocks, avg_row_len
2 from user_tab_statistics
3 where table_name = 'EMPLOYEES'
4 /

  NUM_ROWS SAMPLE_SIZE     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
---------- ----------- ---------- ------------ -----------
     10000       10000         65            0          85

SQL>
SQL> select 85*10000/8192 from dual;

85*10000/8192
-------------
   103.759766

SQL>
SQL> select leaf_blocks
2 from user_ind_statistics
3 where index_name = 'EMPLOYEES_FULL_NAME_NDX'
4 /

LEAF_BLOCKS
-----------
         72

SQL>



After creation of the Function Based Index, the new virtual column in the table has an average length of 40 bytes. Surely the table should now have taken up more blocks, with the average row length increasing from 45 to 85? The number of table blocks should have increased by about 50 or more? And yet the table still consumes only 65 blocks. It is the Index that reports 72 leaf blocks.

So, adding a Function Based Index increases the reported "AVG_ROW_LEN" for the Table without actually increasing the space consumed by the table.  The computed values are not stored in the Table blocks but are actually only in the Index leaf blocks.
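
If you want to confirm that SYS_NC00004$ really is a hidden virtual column (an expression, not stored data), a query like the following, added here only as an illustration, shows the column flags and the underlying expression:

select column_name, virtual_column, hidden_column, data_default
from   user_tab_cols
where  table_name = 'EMPLOYEES'
and    column_name like 'SYS_NC%'
/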

Conversely, this can also mean that when you come across a table whose BLOCKS figure looks low relative to AVG_ROW_LEN*NUM_ROWS, you might want to look for a Function Based Index:


SQL> select index_name, index_type, uniqueness
2 from user_indexes
3 where table_name = 'EMPLOYEES'
4 /

INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ --------------------------- ---------
SYS_C007674                    NORMAL                      UNIQUE
EMPLOYEES_FULL_NAME_NDX        FUNCTION-BASED NORMAL       NONUNIQUE

SQL> select column_position, column_expression
2 from user_ind_expressions
3 where table_name = 'EMPLOYEES'
4 and index_name = 'EMPLOYEES_FULL_NAME_NDX'
5 /

COLUMN_POSITION COLUMN_EXPRESSION
--------------- ------------------------------
              1 "FIRST_NAME"||"LAST_NAME"

SQL>


(SYS_C007674 is the Primary Key Index on EMPLOYEE_ID)
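
For completeness, you can confirm which columns back each index with a query like this (included here purely as an illustration; for the function based index the column reported is the hidden SYS_NC00004$ column):

select index_name, column_name, column_position
from   user_ind_columns
where  table_name = 'EMPLOYEES'
order by index_name, column_position
/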


Categories: DBA Blogs

sql trace and tkprof

Tom Kyte - Sat, 2020-07-11 11:46
hi tom, I have seen a lot of answers from you regarding sql trace and tkprof for sql performance tuning, but I am still not clear on a few things. I hope you will explain them clearly:
1. How do sql trace and tkprof differ from "set autotrace on"?
2. If we use "set autotrace on" and explain plan, how do we analyse a particular query and tell whether it is tuned or not?
3. For sql trace and tkprof: if I do 'ALTER SESSION SET SQL_TRACE = TRUE', where does the trace file get created (my Oracle server is running on unix)? After formatting the trace file with tkprof, how do I analyse it to tell whether the query is tuned or not?
4. Does the order of tables in a query matter for performance? For this question I got the answer that it matters for the RBO but not for the CBO, but who decides whether to use the RBO or the CBO? Can we tell Oracle to use the CBO or the RBO, and if so, when should we choose each?
Sorry for the flow of questions, but I am confused on the above issues. Please answer the above. Thanks a lot. nk
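
(Editorial note: a minimal sketch of the trace-and-tkprof workflow the question refers to; the identifier, path and sort options below are illustrative, and the v$diag_info query assumes 11g or later.)

-- in the session to be traced
alter session set tracefile_identifier = 'nk_test';
alter session set sql_trace = true;

-- run the statement(s) of interest here

alter session set sql_trace = false;

-- locate the trace file (11g and later)
select value from v$diag_info where name = 'Default Trace File';

-- then, from the operating system (path is illustrative):
-- $ tkprof /path/to/orcl_ora_12345_nk_test.trc nk_test.prf sys=no sort=exeela,fchela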
Categories: DBA Blogs

Rename ASM DiskGroup 19c

Michael Dinh - Sat, 2020-07-11 11:17
IMPORTANT: NO DB has been created!!!

Why rename the ASM DG?
1. To test RAC RMAN duplicate for standby, similar to a client environment.
2. Just because.

Rename Diskgroup name by renamedg command doesn't update the OCR (Doc ID 2392499.1)	

Bug 8740909  [11GR2-LNX-090726]OLD DISK GROUP RESOURCE NOT DELETED AFTER RENAMEDG

renamedg does not update resources, nor does renamedg update any file references within the database. 
Because of this, the original disk group resource is not automatically deleted. 
This resource can be manually deleted with Server Control Utility (SRVCTL).
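
(If a database had already been using the old diskgroup name, its file references would also need fixing by hand; a hypothetical example, not needed here because no database exists yet:)

-- hypothetical: only relevant once a database references the renamed diskgroup
alter system set db_recovery_file_dest = '+FRA_PHX' scope=both sid='*';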

==============================

[oracle@ol7-19-phx2 ~]$ asmcmd -V
asmcmd version 19.6.0.0.0
[oracle@ol7-19-phx2 ~]$

==============================

[oracle@ol7-19-phx1 ~]$ crsctl stat res -t -w "TYPE = ora.diskgroup.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA_PHX.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.RECO.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
--------------------------------------------------------------------------------
[oracle@ol7-19-phx1 ~]$


==============================

[oracle@ol7-19-phx1 ~]$ asmcmd umount RECO
[oracle@ol7-19-phx2 ~]$ asmcmd umount RECO

==============================

[oracle@ol7-19-phx1 ~]$ asmcmd lsdg -g
Inst_ID  State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  NORMAL  N         512             512   4096  1048576     98301    47769            32767            7501              0             Y  CRS/
      2  MOUNTED  NORMAL  N         512             512   4096  1048576     98301    47769            32767            7501              0             Y  CRS/
      1  MOUNTED  EXTERN  N         512             512   4096  1048576     32767    32663                0           32663              0             N  DATA_PHX/
      2  MOUNTED  EXTERN  N         512             512   4096  1048576     32767    32663                0           32663              0             N  DATA_PHX/

[oracle@ol7-19-phx1 ~]$ crsctl stat res -t -w "TYPE = ora.diskgroup.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA_PHX.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.RECO.dg(ora.asmgroup)
      1        OFFLINE OFFLINE                               STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
--------------------------------------------------------------------------------
[oracle@ol7-19-phx1 ~]$

==============================

[oracle@ol7-19-phx1 ~]$ renamedg dgname=RECO newdgname=FRA_PHX verbose=true

Parameters in effect:

         Old DG name       : RECO
         New DG name          : FRA_PHX
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: dgname=RECO newdgname=FRA_PHX verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
ERROR: submiting READ on /dev/sdg failed due to kgfknm error
ERROR: -20(27041 /dev/sdg)ERROR: submiting READ on /dev/sda failed due to kgfknm error
ERROR: -20(27041 /dev/sda)ERROR: submiting READ on /dev/sdc failed due to kgfknm error
ERROR: -20(27041 /dev/sdc)ERROR: submiting READ on /dev/sdf failed due to kgfknm error
ERROR: -20(27041 /dev/sdf)ERROR: submiting READ on /dev/sde failed due to kgfknm error
ERROR: -20(27041 /dev/sde)ERROR: submiting READ on /dev/sdd failed due to kgfknm error
ERROR: -20(27041 /dev/sdd)ERROR: submiting READ on /dev/sdb failed due to kgfknm error
ERROR: -20(27041 /dev/sdb)ERROR: submiting READ on /dev/sda1 failed due to kgfknm error
ERROR: -20(27041 /dev/sda1)ERROR: submiting READ on /dev/sda2 failed due to kgfknm error
ERROR: -20(27041 /dev/sda2)ERROR: submiting READ on /dev/sdb1 failed due to kgfknm error
ERROR: -20(27041 /dev/sdb1)Identified disk UFS:/dev/sdg1 with disk number:0 and timestamp (33103213 1600008192)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
ERROR: submiting READ on /dev/sdg failed due to kgfknm error
ERROR: -20(27041 /dev/sdg)ERROR: submiting READ on /dev/sda failed due to kgfknm error
ERROR: -20(27041 /dev/sda)ERROR: submiting READ on /dev/sdc failed due to kgfknm error
ERROR: -20(27041 /dev/sdc)ERROR: submiting READ on /dev/sdf failed due to kgfknm error
ERROR: -20(27041 /dev/sdf)ERROR: submiting READ on /dev/sde failed due to kgfknm error
ERROR: -20(27041 /dev/sde)ERROR: submiting READ on /dev/sdd failed due to kgfknm error
ERROR: -20(27041 /dev/sdd)ERROR: submiting READ on /dev/sdb failed due to kgfknm error
ERROR: -20(27041 /dev/sdb)ERROR: submiting READ on /dev/sda1 failed due to kgfknm error
ERROR: -20(27041 /dev/sda1)ERROR: submiting READ on /dev/sda2 failed due to kgfknm error
ERROR: -20(27041 /dev/sda2)ERROR: submiting READ on /dev/sdb1 failed due to kgfknm error
ERROR: -20(27041 /dev/sdb1)Identified disk UFS:/dev/sdg1 with disk number:0 and timestamp (33103213 1600008192)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/sdg1
Modifying the header
Completed phase 2
[oracle@ol7-19-phx1 ~]$

==============================

[oracle@ol7-19-phx1 ~]$ asmcmd mount FRA_PHX
[oracle@ol7-19-phx2 ~]$ asmcmd mount FRA_PHX

==============================

[oracle@ol7-19-phx1 ~]$ asmcmd lsdg FRA_PHX -g
Inst_ID  State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512             512   4096  1048576     32767    32663                0           32663              0             N  FRA_PHX/
      2  MOUNTED  EXTERN  N         512             512   4096  1048576     32767    32663                0           32663              0             N  FRA_PHX/

[oracle@ol7-19-phx1 ~]$ crsctl stat res -t -w "TYPE = ora.diskgroup.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA_PHX.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA_PHX.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.RECO.dg(ora.asmgroup)
      1        OFFLINE OFFLINE                               STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
--------------------------------------------------------------------------------
[oracle@ol7-19-phx1 ~]$

### For when DG is not online:
srvctl start diskgroup -diskgroup FRA_PHX -node "node_list"

==============================

[oracle@ol7-19-phx1 ~]$ srvctl remove diskgroup -g RECO
[oracle@ol7-19-phx1 ~]$ crsctl stat res -t -w "TYPE = ora.diskgroup.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA_PHX.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA_PHX.dg(ora.asmgroup)
      1        ONLINE  ONLINE       ol7-19-phx1              STABLE
      2        ONLINE  ONLINE       ol7-19-phx2              STABLE
      3        OFFLINE OFFLINE                               STABLE
--------------------------------------------------------------------------------
[oracle@ol7-19-phx1 ~]$

Q.E.D.


ORA-06531: Reference to uninitialized collection

Tom Kyte - Fri, 2020-07-10 17:26
Hi Tom, I am getting "ORA-06531: Reference to uninitialized collection" even after initializing collection to assign a value. Could you please help. Please find the code below.

create or replace TYPE TYP_GRP_ID is object (grp_id number);

create or replace TYPE TAB_GROUP is table of TYP_GRP_ID;

create or replace PACKAGE TEST AS
  PROCEDURE TEST;
END;

create or replace PACKAGE BODY TEST AS
  PROCEDURE TEST IS
    acc_arr TAB_GROUP;
  BEGIN
    acc_arr.EXTEND;
    acc_arr := new TAB_GROUP(null);
    acc_arr(acc_arr.LAST).grp_id := 1000;
    dbms_output.put_line(acc_arr(acc_arr.LAST).grp_id);
  END;
END;

Thanks in advance Mathew
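
(Editorial note: a minimal corrected sketch of the collection handling, added for illustration rather than quoted from the site's answer: initialise the collection before extending it, and assign a constructed object instead of a field of a NULL element.)

declare
  acc_arr TAB_GROUP := TAB_GROUP();              -- initialise the collection first
begin
  acc_arr.extend;                                -- safe now that it is initialised
  acc_arr(acc_arr.last) := TYP_GRP_ID(1000);     -- store a constructed object
  dbms_output.put_line(acc_arr(acc_arr.last).grp_id);
end;
/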
Categories: DBA Blogs

Oracle APEX 20.2 IE 11 Deprecated

Tom Kyte - Fri, 2020-07-10 17:26
Hi, I saw the following deprecated feature in the Oracle APEX 20.1 release notes: "6.1.4 Support for Internet Explorer 11 Deprecated. Support for Internet Explorer (IE) 11 is deprecated. Starting with release 20.2, only the current and prior major release of Microsoft Edge along with Google Chrome, Mozilla Firefox, Apple Safari will be supported." Does this mean that Oracle APEX 20.2 will not work correctly in Microsoft IE 11? Does this statement apply only to the APEX builder, or does it also apply to all applications in the workspace? We have a lot of PCs that are still running IE 11. Thanks!
Categories: DBA Blogs

Oracle JET Paging Control in Oracle APEX

Tom Kyte - Fri, 2020-07-10 17:26
Hi, In APEX SOD, it says there will be additional Oracle JET-based components. May I know if there will include the following paging control in Oracle JET? https://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=pagingControl&demo=basicPagingTable I am particularly interested in replacing the pagination in classic reports by Oracle JET paging control. I haven't seen any information about how to implement these Oracle JET components into Oracle APEX. Thank you!
Categories: DBA Blogs
