Big-O notation (article) | Algorithms | Khan Academy (2024)


  • Roman

    Posted 8 years ago.

    I didn't understand this sentence: "Other, imprecise, upper bounds on binary search would be O(n^2), O(n^3), and O(2^n). But none of Θ(n), Θ(n^2), Θ(n^3), and Θ(2^n) would be correct to describe the running time of binary search in any case."
    Can someone explain it to me please? Thank you!

    (26 votes)

    • Cameron

      Posted 8 years ago.

      Here are a couple of definitions to keep in mind:
      if f(n) is O(g(n)) this means that f(n) grows asymptotically no faster than g(n)
      if f(n) is Θ(g(n)) this means that f(n) grows asymptotically at the same rate as g(n)

      Let's call the running time of binary search f(n).
      f(n) is k * log(n) + c (k and c are constants)

      Asymptotically, log(n) grows no faster than log(n) (since it's the same function), and it also grows no faster than n, n^2, n^3, or 2^n.
      So we can say f(n) is O(log(n)), O(n), O(n^2), O(n^3), and O(2^n).

      This is similar to having x = 1, and saying x <= 1, x <= 10, x <= 100, x <= 1000, x <= 1000000.
      All of these statements are true, but the most precise statement is x <= 1. By precise, we mean that it gives us the best idea of what x actually is.

      Asymptotically, log(n) grows at the same rate as log(n) (since it is the same).
      So, we can say that f(n) is Θ(log(n)).

      This would be similar to having x=1 and then saying x = 1, which would be a precise statement that tells us what x is.

      However, asymptotically, log(n) grows slower than n, n^2, n^3, or 2^n; i.e., log(n) does not grow at the same rate as these functions.
      So, we cannot say f(n) is Θ(n), Θ(n^2), Θ(n^3), or Θ(2^n).

      Similarly, if x = 1, we cannot say that x = 10, x = 100, x = 1000, or x = 1000000.

      Hope this makes sense
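
      To make that concrete, here is a rough Python sketch (the constants k = 3 and c = 5 are made up just for illustration, not measured from anything) that tabulates f(n) = k * log2(n) + c next to n and n^2. The binary-search-style function falls further and further below the looser bounds as n grows, which is why O(n) and O(n^2) are true but imprecise, while Θ(log n) is the tight description:

          import math

          def binary_search_cost(n, k=3, c=5):
              # Hypothetical running-time model for binary search: k*log2(n) + c,
              # where k and c stand in for machine-dependent constants.
              return k * math.log2(n) + c

          for n in [2**4, 2**8, 2**16, 2**20]:
              f = binary_search_cost(n)
              # f(n) stays tiny compared with n and n^2 (and 2^n would be
              # astronomically larger still).
              print(f"n={n:>8}  f(n)={f:8.1f}  n^2={n*n:>14}")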

      (139 votes)

  • wchargin

    Posted 9 years ago.

    If I'm not mistaken, the first paragraph is a bit misleading.

    Before, we used big-Theta notation to describe the worst case running time of binary search, which is Θ(lg n). The best case running time is a completely different matter, and it is Θ(1).

    That is, there are (at least) three different types of running times that we generally consider: best case, average/expected case, and worst case. Usually it's the latter two that are the most useful. For binary search, the best case time is Θ(1), and the expected and worst case times are Θ(lg n).
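
    As a quick illustration of best case versus worst case (this guess-counting version is just a sketch, not the article's code), you can count how many loop iterations binary search performs: a target sitting exactly at the first midpoint gives the Θ(1) best case, while a missing target gives the Θ(lg n) worst case:

        def binary_search_count(arr, target):
            # Standard iterative binary search that also returns how many
            # guesses (loop iterations) it made.
            lo, hi, guesses = 0, len(arr) - 1, 0
            while lo <= hi:
                guesses += 1
                mid = (lo + hi) // 2
                if arr[mid] == target:
                    return mid, guesses
                elif arr[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1, guesses

        arr = list(range(1024))                 # n = 1024, so lg n = 10
        print(binary_search_count(arr, 511))    # best case: first guess hits -> 1 guess
        print(binary_search_count(arr, 5000))   # worst case: absent -> about lg n guesses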

    (30 votes)

  • The complexity of this article is (n^3) at least

    (30 votes)

    • purposenigeria

      Posted 7 years ago.

      What is upper bound, lower bound and tight bound ?

      (6 votes)

  • ☺☻☺Natth4545☺☻☺

    Posted 9 years ago.

    And I thought I could do this. :( Can't learn everything.

    (7 votes)

    • Yahaya Aluke

      Posted 9 years ago.

      If you think you can't, you're right. If you think you can, you're also right. It's all in how you think about it. Stick with it for a while until the function storm passes; it may surprise you that you don't really need to know much math, just how fast a few common functions grow, because you compare an algorithm's rate of growth against them. It's like knowing the order of the letters in the alphabet so you know where to place a stray letter.

      (36 votes)

  • Veronica

    Posted 9 years ago.

    I really don't get this paragraph, could someone please explain it to me in other words?? Thank you!

    "Now we have a way to characterize the running time of binary search in all cases. We can say that the running time of binary search is always O(\lg n)O(lgn). We can make a stronger statement about the worst-case running time: it's \Theta(\lg n)Θ(lgn). But for a blanket statement that covers all cases, the strongest statement we can make is that binary search runs in O(\lg n)O(lgn) time."

    (4 votes)

    • Enrico Susatyo

      Posted 9 years ago.

      It means that:

      - You can find a scenario (the worst case) in which binary search takes Θ(lg n).
      - Covering all scenarios, the most we can say is that binary search takes O(lg n) time, because not every run of binary search takes Θ(lg n).

      (3 votes)

  • mugnaio

    Posted 9 years ago.

    I didn't understand this: " If we say that a running time is Θ(f(n)) in a particular situation, then it's also O(f(n)). "
    Binary search is Θ(1) in a particular situation (best case) but it is not O(1).

    (3 votes)

    • Cameron

      Posted 9 years ago.

      It looks like you are confusing O and Ω with worst case and best case. They are not the same. First we specify the case (worst, best, average, etc.), and then we specify O, Ω (upper bound, lower bound), or Θ (tight bounds).

      For Binary search:
      In the best case scenario (our initial guess finds the target value):
      - binary search is Θ(1) and as a result is also Ω(1) and O(1).

      In the worst case scenario (our target is not in the array):
      - binary search is Θ(log n) and as a result is also Ω(log n) and O(log n).
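
      If it helps to see what the worst-case Θ(log n) claim buys you, here is a hedged sketch (the constants c1 = 0.5 and c2 = 2 are arbitrary picks, nothing special) that simulates a failed search and checks that the guess count is squeezed between c1*log2(n) and c2*log2(n), which is exactly what Θ(log n) asserts:

          import math

          def worst_case_guesses(n):
              # Number of guesses binary search makes when the target is absent
              # from a sorted array of length n (simulated by always going right).
              lo, hi, guesses = 0, n - 1, 0
              while lo <= hi:
                  guesses += 1
                  mid = (lo + hi) // 2
                  lo = mid + 1
              return guesses

          for n in [16, 1024, 10**6]:
              g = worst_case_guesses(n)
              print(n, g, 0.5 * math.log2(n) <= g <= 2 * math.log2(n))  # True for each n here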

      (11 votes)

  • sparksro

    Posted 9 years ago.

    I have a question on upper and lower bounds to make sure my understanding is spot on. Please tell me if I am correct or not, and why if not. Given the set S = {2, 3, 5, 7, 9, 12, 17, 42}, a lower bound could be 2 or 3, for example, but in set S only 2 is the tight lower bound and only 42 is the tight upper bound. Is this correct?

    (2 votes)

    • JaniceHolz

      Posted 9 years ago.

      A lower bound has to be less than or equal to all members of the set. Therefore, here 3 is not a lower bound because it is greater than a member of the set (2). 1 is a lower bound, -3592 is a lower bound, 1.999 is a lower bound -- because each of those is less than every member of the set.
      There is always only 1 tight lower bound: the greatest of all the lower bounds. Here, 2 is indeed the tight lower bound.
      Similarly, there is always only 1 tight upper bound: the least of all the upper bounds. Here, 42 is indeed the tight upper bound.

      Try this example:
      Given the set S2 = {-12, -5, 0, 1, 3, 3}, what is a lower bound? What is an upper bound?
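
      If it helps, here is a tiny Python sketch (using the set S from the question above; the helper name is_lower_bound is just something I made up) of the same idea: a candidate is a lower bound only if it is less than or equal to every member, and the tight lower and upper bounds of a finite set are simply its min and max:

          S = [2, 3, 5, 7, 9, 12, 17, 42]

          def is_lower_bound(x, s):
              # x is a lower bound of s if it is <= every member of s
              return all(x <= member for member in s)

          print(is_lower_bound(1, S))    # True: a (loose) lower bound
          print(is_lower_bound(3, S))    # False: 3 is greater than the member 2
          print(min(S), max(S))          # 2 and 42: the tight lower and upper bounds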

      (12 votes)

  • justinj1776

    Posted 9 years ago.

    Is it absolutely correct to say that binary search runs in Big-O(n)? Why couldn't we say it can run in Big-O(n^2)? Since the upper bound for binary search is Big-O(log(n)), do you mean to say that we start from n and then go to n^2 when considering the upper bound of an algorithm? For example, if an algorithm runs in Big-Theta(n), can we say it runs in Big-O(n^2)?

    (3 votes)

    • Cameron

      Posted 9 years ago.

      Binary search is Θ(log n), which means that it is O(log n) and Ω(log n).

      Since binary search is O(log n), it is also O(any function larger than log n),
      i.e. binary search is O(n), O(n^2), O(n^3), O(e^n), O(n!), etc.

      Another way to express this is by saying:
      Binary search doesn't run slower than really fast algorithms (O(log n)), so:
      Binary search doesn't run slower than fast algorithms (O(n)), so:
      Binary search doesn't run slower than moderate to slow algorithms (O(n^2), O(n^3)), so:
      Binary search doesn't run slower than horribly slow algorithms (O(e^n), O(n!)).

      Hope this makes sense.
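
      As a rough numerical check of that ordering (nothing from the article, just an illustration), each function in the chain dwarfs the previous one once n is moderately large, so each later one is a valid but ever looser upper bound on binary search's roughly log2(n) cost:

          import math

          n = 64
          print("log2(n) =", math.log2(n))   # 6.0
          print("n       =", n)              # 64
          print("n^2     =", n ** 2)         # 4096
          print("2^n     =", 2 ** n)         # 18446744073709551616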

      (6 votes)

  • Elvensong

    Posted 5 years ago.

    These two statements seem contradictory. Earlier, the article says that the running time of binary search is always O(log n) [Para. 6]. Then it says the running time of binary search is O(n) [Para. 3 from bottom].

    I've been taught that the running time of binary search is O(log n) while linear search is O(n). Confusing!

    (3 votes)

  • Jakuiz

    Posted a year ago.

    So, very basically: Big O notation doesn't need to be precise as long as it's still at or above the running time. In contrast, Big-Theta notation needs to be precise, as it describes the running time itself.

    Is that right?

    (2 votes)

    • Cameron

      Posted a year ago.

      Both Big O and Big Theta are descriptions of the running time function. Big O describes the upper bound of the running time function. Big Theta describes where the upper and lower bounds of the running time function meet.

      While Big O can technically be anything over the running time, in practice, we want it to be as tight to the running time as possible. If you had an algorithm with a running time that was known to be Big O n^2 and you said it was Big O n^3 (which is technically true), people would probably try to correct you and say it is Big O n^2 (which is a better description of the running time function, because it bounds it more tightly).

      Big Theta is a more precise description of the running time function in the sense that it is where the lower bounds and upper bounds meet (it gives you the assurance that the lower bound isn't any higher, and the upper bound isn't any lower). But a Big Theta n^2 includes both 0.0001 n^2 and 999999999 n^2 + 5000000, so it isn't necessarily what one would call a "precise" description.
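
      To see why Θ(n^2) is not "precise" in the everyday sense, here is a short sketch reusing the two made-up functions from above: divided by n^2, each settles toward its own constant, and that shared n^2 shape is all that Θ(n^2) promises:

          def cheap(n):
              return 0.0001 * n ** 2

          def expensive(n):
              return 999999999 * n ** 2 + 5000000

          # Both are Theta(n^2): the ratio to n^2 approaches a constant
          # (0.0001 and 999999999 respectively), even though those constants
          # differ by about 13 orders of magnitude.
          for n in [10**3, 10**6, 10**9]:
              print(n, cheap(n) / n**2, expensive(n) / n**2)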

      (5 votes)
