What To Learn Java Or Javascript First

How to Learn Java

Java is one of the most popular programming languages in the world. If you learn to program in Java, you will have access to a wide range of possibilities.

Java is a versatile, widely supported programming language used for everything from desktop software to mobile app development. In fact, as of 2019, Android held 88% of the smartphone market share, and Android is written in Java.

But how do you learn to program in Java? This is the question we will address in this article. We’re going to go over our tips on how to start your Java journey. We’ll also give you a list of concrete topics you can explore. By the end of this article, you will have a roadmap to guide you in learning Java.


Why learn Java?

Java is widely used. Your Java skills will take you a long way in your career. Indeed, businesses of all sizes, even companies like Netflix, report using Java. Mentioning Java as a skill on your CV is a good way to land a software engineering interview.

Java is considered a relatively easy language for beginners to learn. This is because Java has a simple syntax that you can pick up if you invest the right amount of time and effort. Whether you’re completely new to programming or already know a thing or two, Java can be a great place to start.

There is no shortage of online resources to help you learn Java. There are many online communities dedicated to Java development, making it incredibly easy to find quality help if you get stuck along the way. In addition to communities, you will find comprehensive guides and tutorials that can help you master the basics and debug errors.


Is Java hard to learn?

How complicated learning Java is depends on how much experience you already have. Along the way, you will encounter many challenges, but none that cannot be overcome. If you have some previous programming experience, you will find it easier to learn Java.

Programming is a complex skill that requires you to think very carefully about the instructions you give a computer. For all their dazzling sophistication, computers are surprisingly literal machines: they do exactly what you tell them, nothing more.

It doesn’t sound hard to break a problem down into small steps, but it can be. As with anything else, you will probably find it easier to learn Java if you have some contextual knowledge of programming.

What is Java?

Java is a general-purpose language. This means it has a wide range of potential use cases. Java is commonly used in corporate settings, such as the financial industry. It is also used to develop applications for Android devices.

Since the Android operating system was built using Java, you will find millions of mobile applications around the world that use Java.

Java is widely used in web development on the back end. Indeed, Java has a range of frameworks, such as Spring and Struts, designed to facilitate back-end web development.

Many modern desktop applications have been written in Java. For example, the Minecraft video game was written in Java. There are also many popular libraries that extend Java to desktop application development.

How long does it take to learn Java?

With the right amount of practice and dedication, you should be able to learn the basics of Java in about three months. Expect to spend at least a year to fully familiarize yourself with the language.

Of course, this estimate depends on how much time you spend learning Java. If you study for a few hours a week, learning Java in three months is a good estimate.

Those who enroll in a programming course can learn faster, depending on the pace of the course. Naturally, those who study part-time or full-time learn faster than those studying in their spare time.

How to Learn Java Online

Now we can dive into the steps you need to follow to learn Java online for free. Here are five steps that we will cover in more detail below:

  1. Think about your motivation. Set an objective.
  2. Learn the basics of Java.
  3. Create your own projects.
  4. Get help and join communities.
  5. Practice, practice, practice.
    1. Think about your motivation. Set an objective.

    Before starting a new learning path, consider taking the time to reflect on your motivation. In this case, remember to ask yourself the following question:

    Why do I want to learn Java?

    By thinking about this question in advance, you will have a North Star you can look to along your journey. That way, if you get stuck, you will have a reason (or reasons) to continue.

    Also, consider setting a clear goal for what you want to accomplish. Are you interested in pursuing a career in software engineering? Great! In that case, you should focus on Java and software engineering. Looking to build mobile apps? If so, you will want to master the basics of Java, then explore Java mobile application development.

    2. Learn the basics of Java.

    One mistake many newcomer developers make is diving headlong into a programming language without covering the fundamentals first. This is a problem: ignoring the fundamentals could mean you lack the knowledge needed to explore more advanced topics later.

    You can learn the basics of Java through a structured online Java course , books or Java tutorials.

    The Java programming language encompasses a wide range of concepts, and even experienced Java developers are still learning how to use the language better. With that said, there are a few key topics you need to master at the start of your journey. Let’s explore them one by one.

    Syntax

    The first step in learning any programming language is to master the syntax of the language. Developers use the word "syntax" to describe the way code is written. Syntax covers, among other things, how code is structured, which characters are used where, and how to add comments to a file.

    Here are the Java basics you must learn before going further:

    • How Java Programs Work
    • Data Types Used in Java
    • Java Operators
    • Java Expressions
    • How to Write Comments in Java

    Once you have a basic understanding of these, you will be better equipped to understand the other elements of Java syntax, including conditionals, loops, arrays, and more.
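    A tiny program ties these syntax basics together — comments, a data type, operators, and an expression. The class name and message below are invented for illustration:

```java
public class Hello {
    // An expression built from variables, operators, and string concatenation.
    static String message(int year, int share) {
        return "Android held " + share + "% of the market in " + year;
    }

    public static void main(String[] args) {
        /* Block comment: print the message. */
        System.out.println(message(2019, 88)); // line comment
    }
}
```

    Compiling and running this prints a single line of text, illustrating how a Java program starts from the main method.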

    Conditionals

    A conditional performs certain actions depending on whether a specific condition or a set of conditions is met.
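    As a sketch, here is a conditional that classifies a number (the class and method names are made up):

```java
public class SignCheck {
    static String sign(int n) {
        if (n > 0) {
            return "positive";   // runs only when the condition n > 0 is met
        } else if (n < 0) {
            return "negative";
        } else {
            return "zero";
        }
    }

    public static void main(String[] args) {
        System.out.println(sign(42)); // prints "positive"
    }
}
```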

    Loops

    While programming, you may decide to run a block of code multiple times. This is where loops come in. Loops automate repetitive tasks. They reduce the need for code duplication .
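    For instance, a for loop can replace writing the same addition over and over (a hypothetical example):

```java
public class LoopDemo {
    // Sum the integers 1..n without duplicating the addition n times.
    static int sumUpTo(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumUpTo(5)); // prints 15
    }
}
```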

    Arrays

    An array is a data type that stores multiple values. These values must be of the same data type. For example, an array might contain a list of student names or a list of employee email addresses.

    The main subtopics of Java arrays to cover are:

    • Declaring an array
    • Indexing arrays
    • Manipulating data in arrays
    • Declaring multidimensional arrays
    • Copying an array
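    The subtopics above can be sketched in a few lines (the data and names are invented):

```java
import java.util.Arrays;

public class ArrayDemo {
    // Copy an array, then change one element in the copy (index is zero-based).
    static String[] replaceAt(String[] source, int index, String value) {
        String[] copy = Arrays.copyOf(source, source.length); // copying an array
        copy[index] = value;                                  // manipulating data
        return copy;
    }

    public static void main(String[] args) {
        String[] students = {"Ada", "Grace", "Alan"}; // declaring an array
        System.out.println(students[0]);              // indexing: prints "Ada"

        int[][] grid = {{1, 2}, {3, 4}};              // a multidimensional array
        System.out.println(grid[1][0]);               // prints 3

        System.out.println(Arrays.toString(replaceAt(students, 2, "Edsger")));
    }
}
```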

    Classes and objects

    Java is an object-oriented programming language. In Java, classes and objects are used to break complex problems down into simpler pieces.

    Classes are models for objects. For example, a class can store details about a car, like the type of tire, etc.

    Objects are created from a class blueprint. For example, an object can store details about a specific car, like a Lotus 72 or a Porsche 959.
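    A minimal sketch of the blueprint/object relationship, using a hypothetical Car class:

```java
public class Car {
    private final String model;    // details every car has
    private final String tireType;

    Car(String model, String tireType) {
        this.model = model;
        this.tireType = tireType;
    }

    String describe() {
        return model + " on " + tireType + " tires";
    }

    public static void main(String[] args) {
        // Two objects built from the same class blueprint.
        Car lotus = new Car("Lotus 72", "slick");
        Car porsche = new Car("Porsche 959", "road");
        System.out.println(lotus.describe());
        System.out.println(porsche.describe());
    }
}
```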

    The main topics you need to master in this area are:

    Inheritance, Polymorphism, and Encapsulation

    Object-oriented programming languages like Java have many features that allow developers to improve code efficiency and reduce code repetition.

    Once you learn about classes in Java, there are three basic object-oriented concepts you should know. These are:

    • Inheritance: describes how to define a new class using the properties of an existing class
    • Polymorphism: describes how an object can take multiple forms in a program
    • Encapsulation: a technique used to group fields and methods into a class

    In addition, take the time to learn method overriding and the super keyword, both of which are related to Java inheritance.
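    All three concepts, plus overriding and super, fit in one small hierarchy (every name here is invented for illustration):

```java
public class Shapes {
    static class Shape {
        private final String name;              // encapsulation: a private field,
        Shape(String name) { this.name = name; }
        String getName() { return name; }       // exposed only through a method
        double area() { return 0; }
    }

    // Inheritance: Square reuses Shape's fields and methods.
    static class Square extends Shape {
        private final double side;
        Square(double side) {
            super("square");                    // the super keyword calls Shape's constructor
            this.side = side;
        }
        @Override
        double area() { return side * side; }   // method overriding
    }

    public static void main(String[] args) {
        Shape s = new Square(3);                // polymorphism: a Shape reference holds a Square
        System.out.println(s.getName() + " area: " + s.area());
    }
}
```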

    Data Structures

    Data structures refer to systems that allow information to be stored in specific ways. An array, which we discussed earlier in this guide, is an example of a Java data structure. Java also offers a wide variety of other data structures that you can use.

    Here are some of the most common data structures you should learn:
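    For example, two structures from java.util that come up constantly are ArrayList (a resizable list) and HashMap (key-value lookups). A small sketch with invented data:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class StructuresDemo {
    // Pair up names and emails into a lookup map.
    static HashMap<String, String> toDirectory(List<String> names, List<String> emails) {
        HashMap<String, String> directory = new HashMap<>();
        for (int i = 0; i < names.size(); i++) {
            directory.put(names.get(i), emails.get(i));
        }
        return directory;
    }

    public static void main(String[] args) {
        ArrayList<String> names = new ArrayList<>(); // grows as needed, unlike an array
        names.add("Ada");
        ArrayList<String> emails = new ArrayList<>();
        emails.add("ada@example.com");
        System.out.println(toDirectory(names, emails).get("Ada"));
    }
}
```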

    Debugging

    Even the best programmers make mistakes at some point. Programmers use debugging to identify and patch errors in their code. Being able to effectively debug a program reduces the impact of errors in your code.
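    A classic bug worth practicing on is integer division, which silently truncates. A hypothetical sketch, with a print statement as the simplest debugging tool:

```java
public class DebugDemo {
    static double average(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        // Buggy version: "return sum / values.length;" truncates (3 / 2 == 1).
        // Printing intermediate values is an easy way to spot it:
        // System.out.println("sum=" + sum + " count=" + values.length);
        return (double) sum / values.length;  // fixed: cast before dividing
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{1, 2})); // prints 1.5, not 1.0
    }
}
```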

    To better understand debugging in Java, study the following topics:

    Ultimately, mastering the fundamentals will give you a better understanding of how the Java programming language works. So if someone asks, "What does encapsulation mean?", you will be able to respond effectively. In addition, once you master the basics, you’re ready to move on to the next step in your Java learning journey: creating projects.

    Learn Java online for free

    You do not need to spend money to learn how to program in Java. There are many free online resources you can use to master the Java programming language.

    Java online courses

    Learn Java with Codecademy

    • Cost: Free
    • Audience: Beginners

    This online course covers the basics of Java and object-oriented programming. It takes about 25 hours to complete. During this time, you will create seven projects that will help you practice your skills.

    Java Programming and Software Engineering Fundamentals by Duke University

    • Cost: Free
    • Audience: Beginners

    This course covers the basics of programming in Java. You will work on a number of projects throughout to develop your understanding of basic programming ideas. Then you will work on a final project to put everything you learned in the course into practice.

    Java tutorial for beginners

    • Cost: Free
    • Audience: Beginners

    This tutorial includes over 16 hours of Java programming language material. You will cover the basics of Java and learn how Java collection data types work.

    Java Books Online

    Head First Java by Kathy Sierra and Bert Bates

    Head First Java does what it says on the cover: it provides a detailed introduction to Java for novice programmers. You will cover everything from the basics of programming to inputs and outputs.

    Java: Programming Basics for Absolute Beginners by Nathan Clark

    This book is a step-by-step guide on how to code in Java. With the help of 57 practical examples, you will go from knowing very little or nothing about Java to having a good understanding of the fundamentals. This book covers variables, the Java Development Kit, decision making, and more.

    Java: A Beginner’s Guide

    Java: A Beginner’s Guide begins with the basics of Java and writing your first program. It then moves on to intermediate and advanced concepts, helping you develop a deep understanding of Java.

    The book includes a series of hands-on exercises to test your skills. You will also find annotated sample syntax illustrating how certain concepts work.

    Java Resources Online

    Home and Learn Java

    This free online tutorial is for beginners who want to start programming in Java. This guide walks you through all the concepts needed to master the basics of Java.

    Java Code Geeks

    This site has a wide range of tutorials and code snippets covering concepts for beginner, intermediate, and advanced developers.

    Java 101

    Java 101 is a free online course that gives you a taste of programming in Java. This course is ideal if you are a beginner. There are many examples that you can refer to as you build your knowledge.

    Oracle Tutorial

    Oracle has a free online Java tutorial covering both the basics and more advanced concepts. These tutorials are useful if you are ready to practice your Java skills or if you need a reference guide for one particular concept.

    You should explore a few different options before choosing a tutorial to focus your attention on. With so many options, you shouldn’t have a problem finding one that’s right for you.

    The best ways to learn Java

    3. Create your own projects.

    While learning theory is important, there is no substitute for building your own projects. Doing so will strengthen the skills you develop. Building your own projects encourages you to think about a problem in depth, and you will learn to use analytical thinking to find solutions to the problems you encounter.

    The best way to learn Java is, after learning the theory, to move on to creating practical projects. Even working on small and simple projects can give you new insight into the theory you have learned. This will improve your understanding of how the Java programming language works.

    Here are some ideas for what you could build:

    • An online quiz game for programmers
    • A tool that keeps track of your favorite books
    • A simple online chat application that lets you communicate with your friends
    • A mobile application for currency conversion
    • A flashcard tool to help you review your Java knowledge

    But don’t let our suggestions limit you. If you have an idea, try to make it happen! To start, though, keep your projects small to make sure you are working toward goals you can achieve. Then, once you are more familiar with Java, you can take on new challenges.

    4. Get help and join communities.

    Another common mistake new programmers make is learning entirely by themselves. "I will learn on my own, and I will share my skills with others when I am done" is a common refrain.

    Learning on your own may feel more comfortable, but it also means you will be hard pressed to find support when you need it.

    It’s important that you take control of your learning, but you shouldn’t be afraid to ask for help when you need it. It is very likely that another programmer has encountered the problem you have encountered at some point! Asking people for help is a great way to find a solution to the challenges you are facing.

    You may be wondering " Where can I find people who can help me on my journey ?" Well, luckily for you, the internet is full of communities for programmers of all skill levels, from beginners to experts.

    As a novice Java developer, you can join communities like Dev.to, CodeGym Help, and Stack Overflow, all of which have dedicated areas for Java development. You can also subscribe to the learnjava subreddit on Reddit. These communities are great places to meet other developers who can help you on your Java mastery journey.

    5. Practice, practice, practice.

    Practice is key to learning any skill, but it’s especially important when learning a programming language like Java. If you’re not convinced, here are some reasons why practice is so important when learning Java:

    • Practice makes it easy to find your mistakes. As you gain more experience, it will be easier for you to identify and correct your past mistakes.
    • Practice encourages you to move forward. The more you practice, the more likely you are to continue on your path to learning Java.
    • Practice helps you master best practices. The only way to learn how to write effective Java code is to try writing code yourself. You can then update your work as you learn new techniques and best practices.

    As the saying goes, practice makes perfect. If you’re having trouble making a schedule, there’s one rule that can help: practice daily. Try to practice as much as possible so that you have frequent opportunities to work your programming muscles.

    Conclusion

    Java is a great programming language to learn, whether you are new to programming or are already an experienced programmer.

    At the start of your journey, you should focus on mastering the fundamentals: syntax, conditionals, loops, debugging, etc. Once you are familiar with the basics of Java, you can start working on some projects.

    The benefits of learning Java are clear. You will learn a new skill that can help you start a career in technology. You can also use Java to troubleshoot problems you have with the code. With the tips we’ve covered in this guide, you’re ready to begin your Java programming learning journey.

    Python.Engineering has created a directory of Java resources that can help you continue your learning.

    What To Learn Java Or Javascript First: StackOverflow Questions

    How can I make a time delay in Python?

    I would like to know how to put a time delay in a Python script.

    Answer #1:

    import time
    time.sleep(5)   # Delays for 5 seconds. You can also use a float value.
    

    Here is another example where something is run approximately once a minute:

    import time
    while True:
        print("This prints once a minute.")
        time.sleep(60) # Delay for 1 minute (60 seconds).
    

    Answer #2:

    You can use the sleep() function in the time module. It can take a float argument for sub-second resolution.

    from time import sleep
    sleep(0.1) # Time in seconds
    

    Answer #3:

    How can I make a time delay in Python?

    In a single thread I suggest the sleep function:

    >>> from time import sleep
    
    >>> sleep(4)
    

    This function actually suspends the processing of the thread in which it is called by the operating system, allowing other threads and processes to execute while it sleeps.

    Use it for that purpose, or simply to delay a function from executing. For example:

    >>> def party_time():
    ...     print("hooray!")
    ...
    >>> sleep(3); party_time()
    hooray!
    

    "hooray!" is printed 3 seconds after I hit Enter.

    Example using sleep with multiple threads and processes

    Again, sleep suspends your thread - it uses next to zero processing power.

    To demonstrate, create a script like this (I first attempted this in an interactive Python 3.5 shell, but sub-processes can't find the party_later function for some reason):

    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, as_completed
    from time import sleep, time
    
    def party_later(kind="", n=""):
        sleep(3)
        return kind + n + " party time!: " + __name__
    
    def main():
        with ProcessPoolExecutor() as proc_executor:
            with ThreadPoolExecutor() as thread_executor:
                start_time = time()
                proc_future1 = proc_executor.submit(party_later, kind="proc", n="1")
                proc_future2 = proc_executor.submit(party_later, kind="proc", n="2")
                thread_future1 = thread_executor.submit(party_later, kind="thread", n="1")
                thread_future2 = thread_executor.submit(party_later, kind="thread", n="2")
                for f in as_completed([
                  proc_future1, proc_future2, thread_future1, thread_future2,]):
                    print(f.result())
                end_time = time()
        print("total time to execute four 3-sec functions:", end_time - start_time)
    
    if __name__ == "__main__":
        main()
    

    Example output from this script:

    thread1 party time!: __main__
    thread2 party time!: __main__
    proc1 party time!: __mp_main__
    proc2 party time!: __mp_main__
    total time to execute four 3-sec functions: 3.4519670009613037
    

    Multithreading

    You can trigger a function to be called at a later time in a separate thread with the Timer threading object:

    >>> from threading import Timer
    >>> t = Timer(3, party_time, args=None, kwargs=None)
    >>> t.start()
    >>>
    >>> hooray!
    
    >>>
    

    The blank line illustrates that the function printed to my standard output, and I had to hit Enter to ensure I was on a prompt.

    The upside of this method is that while the Timer thread was waiting, I was able to do other things, in this case, hitting Enter one time - before the function executed (see the first empty prompt).

    There isn't a corresponding object in the multiprocessing library. You can create one, but it probably doesn't exist for a reason. A sub-thread makes a lot more sense for a simple timer than a whole new subprocess.

    Answer #4:

    Delays can also be implemented by using the following methods.

    The first method:

    import time
    time.sleep(5) # Delay for 5 seconds.
    

    The second method to delay would be using Selenium's implicit wait method (applicable only in Selenium scripts):

     driver.implicitly_wait(5)
    

    The third method is more useful when you have to wait until a particular action is completed or until an element is found:

    self.wait.until(EC.presence_of_element_located((By.ID, "UserName")))
    

    How to delete a file or folder in Python?

    How do I delete a file or folder in Python?

    Answer #1:


    To delete a file, use os.remove() or os.unlink(). To delete an empty directory, use os.rmdir(), and to delete a directory and all of its contents, use shutil.rmtree().

    Path objects from the Python 3.4+ pathlib module also expose these instance methods: Path.unlink() removes a file, and Path.rmdir() removes an empty directory.


    How to get an absolute file path in Python

    Question by izb

    Given a path such as "mydir/myfile.txt", how do I find the file's absolute path relative to the current working directory in Python? E.g. on Windows, I might end up with:

    "C:/example/cwd/mydir/myfile.txt"
    

    Answer #1:

    >>> import os
    >>> os.path.abspath("mydir/myfile.txt")
    "C:/example/cwd/mydir/myfile.txt"
    

    Also works if it is already an absolute path:

    >>> import os
    >>> os.path.abspath("C:/example/cwd/mydir/myfile.txt")
    "C:/example/cwd/mydir/myfile.txt"
    

    What does from __future__ import absolute_import actually do?

    I have answered a question regarding absolute imports in Python, which I thought I understood based on reading the Python 2.5 changelog and accompanying PEP. However, upon installing Python 2.5 and attempting to craft an example of properly using from __future__ import absolute_import, I realize things are not so clear.

    Straight from the changelog linked above, this statement accurately summarized my understanding of the absolute import change:

    Let's say you have a package directory like this:

    pkg/
    pkg/__init__.py
    pkg/main.py
    pkg/string.py
    

    This defines a package named pkg containing the pkg.main and pkg.string submodules.

    Consider the code in the main.py module. What happens if it executes the statement import string? In Python 2.4 and earlier, it will first look in the package's directory to perform a relative import, finds pkg/string.py, imports the contents of that file as the pkg.string module, and that module is bound to the name "string" in the pkg.main module's namespace.

    So I created this exact directory structure:

    $ ls -R
    .:
    pkg/
    
    ./pkg:
    __init__.py  main.py  string.py
    

    __init__.py and string.py are empty. main.py contains the following code:

    import string
    print string.ascii_uppercase
    

    As expected, running this with Python 2.5 fails with an AttributeError:

    $ python2.5 pkg/main.py
    Traceback (most recent call last):
      File "pkg/main.py", line 2, in <module>
        print string.ascii_uppercase
    AttributeError: "module" object has no attribute "ascii_uppercase"
    

    However, further along in the 2.5 changelog, we find this (emphasis added):

    In Python 2.5, you can switch import's behaviour to absolute imports using a from __future__ import absolute_import directive. This absolute-import behaviour will become the default in a future version (probably Python 2.7). Once absolute imports are the default, import string will always find the standard library's version.

    I thus created pkg/main2.py, identical to main.py but with the additional future import directive. It now looks like this:

    from __future__ import absolute_import
    import string
    print string.ascii_uppercase
    

    Running this with Python 2.5, however... fails with an AttributeError:

    $ python2.5 pkg/main2.py
    Traceback (most recent call last):
      File "pkg/main2.py", line 3, in <module>
        print string.ascii_uppercase
    AttributeError: "module" object has no attribute "ascii_uppercase"
    

    This pretty flatly contradicts the statement that import string will always find the std-lib version with absolute imports enabled. What's more, despite the warning that absolute imports are scheduled to become the "new default" behavior, I hit this same problem using both Python 2.7, with or without the __future__ directive:

    $ python2.7 pkg/main.py
    Traceback (most recent call last):
      File "pkg/main.py", line 2, in <module>
        print string.ascii_uppercase
    AttributeError: "module" object has no attribute "ascii_uppercase"
    
    $ python2.7 pkg/main2.py
    Traceback (most recent call last):
      File "pkg/main2.py", line 3, in <module>
        print string.ascii_uppercase
    AttributeError: "module" object has no attribute "ascii_uppercase"
    

    as well as Python 3.5, with or without (assuming the print statement is changed in both files):

    $ python3.5 pkg/main.py
    Traceback (most recent call last):
      File "pkg/main.py", line 2, in <module>
        print(string.ascii_uppercase)
    AttributeError: module "string" has no attribute "ascii_uppercase"
    
    $ python3.5 pkg/main2.py
    Traceback (most recent call last):
      File "pkg/main2.py", line 3, in <module>
        print(string.ascii_uppercase)
    AttributeError: module "string" has no attribute "ascii_uppercase"
    

    I have tested other variations of this. Instead of string.py, I have created an empty module -- a directory named string containing only an empty __init__.py -- and instead of issuing imports from main.py, I have cd'd to pkg and run imports directly from the REPL. Neither of these variations (nor a combination of them) changed the results above. I cannot reconcile this with what I have read about the __future__ directive and absolute imports.

    It seems to me that this is easily explicable by the following (this is from the Python 2 docs but this statement remains unchanged in the same docs for Python 3):

    sys.path

    (...)

    As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first.

    So what am I missing? Why does the __future__ statement seemingly not do what it says, and what is the resolution of this contradiction between these two sections of documentation, as well as between described and actual behavior?

    Answer #1:

    The changelog is sloppily worded. from __future__ import absolute_import does not care about whether something is part of the standard library, and import string will not always give you the standard-library module with absolute imports on.

    from __future__ import absolute_import means that if you import string, Python will always look for a top-level string module, rather than current_package.string. However, it does not affect the logic Python uses to decide what file is the string module. When you do

    python pkg/script.py
    

    pkg/script.py doesn't look like part of a package to Python. Following the normal procedures, the pkg directory is added to the path, and all .py files in the pkg directory look like top-level modules. import string finds pkg/string.py not because it's doing a relative import, but because pkg/string.py appears to be the top-level module string. The fact that this isn't the standard-library string module doesn't come up.

    To run the file as part of the pkg package, you could do

    python -m pkg.script
    

    In this case, the pkg directory will not be added to the path. However, the current directory will be added to the path.

    You can also add some boilerplate to pkg/script.py to make Python treat it as part of the pkg package even when run as a file:

    if __name__ == "__main__" and __package__ is None:
        __package__ = "pkg"
    

    However, this won't affect sys.path. You'll need some additional handling to remove the pkg directory from the path, and if pkg's parent directory isn't on the path, you'll need to stick that on the path too.

    How to check if a path is absolute path or relative path in a cross-platform way with Python?

    A UNIX absolute path starts with "/", whereas a Windows path starts with a drive letter such as "C:\" or a backslash. Does Python have a standard function to check if a path is absolute or relative?

    Answer #1:

    os.path.isabs returns True if the path is absolute, False if not. The documentation says it works in Windows (I can confirm it works in Linux personally).

    os.path.isabs(my_path)
    

    Get relative path from comparing two absolute paths

    Say, I have two absolute paths. I need to check if the location referred to by one of the paths is a descendant of the other. If true, I need to find out the relative path of the descendant from the ancestor. What's a good way to implement this in Python? Any library that I can benefit from?

    Answer #1:

    os.path.commonprefix() and os.path.relpath() are your friends:

    >>> print os.path.commonprefix(["/usr/var/log", "/usr/var/security"])
    "/usr/var"
    >>> print os.path.commonprefix(["/tmp", "/usr/var"])  # No common prefix: the root is the common prefix
    "/"
    

    You can thus test whether the common prefix is one of the paths, i.e. if one of the paths is a common ancestor:

    paths = […, …, …]
    common_prefix = os.path.commonprefix(list_of_paths)
    if common_prefix in paths:
        …
    

    You can then find the relative paths:

    relative_paths = [os.path.relpath(path, common_prefix) for path in paths]
    

    You can even handle more than two paths, with this method, and test whether all the paths are all below one of them.

    PS: depending on how your paths look like, you might want to perform some normalization first (this is useful in situations where one does not know whether they always end with "/" or not, or if some of the paths are relative). Relevant functions include os.path.abspath() and os.path.normpath().

    PPS: as Peter Briggs mentioned in the comments, the simple approach described above can fail:

    >>> os.path.commonprefix(["/usr/var", "/usr/var2/log"])
    "/usr/var"
    

    even though /usr/var is not a common prefix of the paths. Forcing all paths to end with "/" before calling commonprefix() solves this (specific) problem.

    PPPS: as bluenote10 mentioned, adding a slash does not solve the general problem. Here is his followup question: How to circumvent the fallacy of Python's os.path.commonprefix?

    PPPPS: starting with Python 3.4, we have pathlib, a module that provides a saner path manipulation environment. I guess that the common prefix of a set of paths can be obtained by getting all the prefixes of each path (with PurePath.parents()), taking the intersection of all these parent sets, and selecting the longest common prefix.

    PPPPPS: Python 3.5 introduced a proper solution to this question: os.path.commonpath(), which returns a valid path.

    How to join absolute and relative urls?

    I have two urls:

    url1 = "http://127.0.0.1/test1/test2/test3/test5.xml"
    url2 = "../../test4/test6.xml"
    

    How can I get an absolute url for url2?

    Answer #1:

    You should use urlparse.urljoin:

    >>> import urlparse
    >>> urlparse.urljoin(url1, url2)
    "http://127.0.0.1/test1/test4/test6.xml"
    

    With Python 3 (where urlparse was renamed to urllib.parse) you can use it as follows:

    >>> import urllib.parse
    >>> urllib.parse.urljoin(url1, url2)
    "http://127.0.0.1/test1/test4/test6.xml"
    
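    For a feel of how the resolution works: each "../" removes one segment from the base path, a bare name replaces the last segment, and a leading "/" resets the path to the site root. A small sketch:

```python
from urllib.parse import urljoin

base = "http://127.0.0.1/test1/test2/test3/test5.xml"

# Each "../" climbs one directory from the base's directory
print(urljoin(base, "../../test4/test6.xml"))  # -> http://127.0.0.1/test1/test4/test6.xml
# A bare name replaces the final segment
print(urljoin(base, "test6.xml"))              # -> http://127.0.0.1/test1/test2/test3/test6.xml
# A leading "/" resets to the root of the site
print(urljoin(base, "/test6.xml"))             # -> http://127.0.0.1/test6.xml
```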

    What To Learn Java Or Javascript First: StackOverflow Questions

    Removing white space around a saved image in matplotlib

    I need to take an image and save it after some process. The figure looks fine when I display it, but after saving the figure, I got some white space around the saved image. I have tried the "tight" option for savefig method, did not work either. The code:

      import matplotlib.image as mpimg
      import matplotlib.pyplot as plt
    
      fig = plt.figure(1)
      img = mpimg.imread(path)
      plt.imshow(img)
      ax=fig.add_subplot(1,1,1)
    
      extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
      plt.savefig("1.png", bbox_inches=extent)
    
      plt.axis("off") 
      plt.show()
    

    I am trying to draw a basic graph with NetworkX on a figure and save it. I realized that without the graph it works, but when I add a graph I get white space around the saved image:

    import matplotlib.image as mpimg
    import matplotlib.pyplot as plt
    import networkx as nx
    
    G = nx.Graph()
    G.add_node(1)
    G.add_node(2)
    G.add_node(3)
    G.add_edge(1,3)
    G.add_edge(1,2)
    pos = {1:[100,120], 2:[200,300], 3:[50,75]}
    
    fig = plt.figure(1)
    img = mpimg.imread("image.jpg")
    plt.imshow(img)
    ax=fig.add_subplot(1,1,1)
    
    nx.draw(G, pos=pos)
    
    extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
    plt.savefig("1.png", bbox_inches = extent)
    
    plt.axis("off") 
    plt.show()
    

    Answer #1:

    You can remove the white space padding by setting bbox_inches="tight" in savefig:

    plt.savefig("test.png",bbox_inches="tight")
    

    You"ll have to put the argument to bbox_inches as a string, perhaps this is why it didn"t work earlier for you.


    Possible duplicates:

    Matplotlib plots: removing axis, legends and white spaces

    How to set the margins for a matplotlib figure?

    Reduce left and right margins in matplotlib plot

    Answer #2:

    I cannot claim I know exactly why or how my “solution” works, but this is what I had to do when I wanted to plot the outline of a couple of aerofoil sections — without white margins — to a PDF file. (Note that I used matplotlib inside an IPython notebook, with the -pylab flag.)

    plt.gca().set_axis_off()
    plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0, 
                hspace = 0, wspace = 0)
    plt.margins(0,0)
    plt.gca().xaxis.set_major_locator(plt.NullLocator())
    plt.gca().yaxis.set_major_locator(plt.NullLocator())
    plt.savefig("filename.pdf", bbox_inches = "tight",
        pad_inches = 0)
    

    I have tried deactivating different parts of this, but it always led to a white margin somewhere. You may even have to modify it to keep thick lines near the limits of the figure from being clipped by the lack of margins.


    How do I install pip on macOS or OS X?

    I spent most of the day yesterday searching for a clear answer for installing pip (the package manager for Python). I can't find a good solution.

    How do I install it?

    Answer #1:

    UPDATE (Jan 2019):

    easy_install has been deprecated. Please use get-pip.py instead.


    Old answer:

    easy_install pip
    

    If you need admin privileges to run this, try:

    sudo easy_install pip
    

    Answer #2:

    ⚡️ TL;DR — One line solution.

    All you have to do is:

    sudo easy_install pip
    

    2019: ⚠️easy_install has been deprecated. Check Method #2 below for preferred installation!

    Details:

    ⚡️ OK, I read the solutions given above, but here's an EASY solution to install pip.

    MacOS comes with Python installed. But to make sure that you have Python installed open the terminal and run the following command.

    python --version
    

    If this command returns a version number, Python is installed, which also means you already have access to easy_install, since you are using macOS/OS X.

    ℹ️ Now, all you have to do is run the following command.

    sudo easy_install pip
    

    After that, pip will be installed and you'll be able to use it for installing other packages.

    Let me know if you have any problems installing pip this way.

    Cheers!

    P.S. I ended up blogging a post about it. QuickTip: How Do I Install pip on macOS or OS X?


    ✅ UPDATE (Jan 2019): METHOD #2: Two line solution —

    easy_install has been deprecated. Please use get-pip.py instead.

    First of all download the get-pip file

    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    

    Now run this file to install pip

    python get-pip.py
    

    That should do it.


    Answer #3:

    You can install it through Homebrew on OS X. Why would you install Python with Homebrew?

    The version of Python that ships with OS X is great for learning but it’s not good for development. The version shipped with OS X may be out of date from the official current Python release, which is considered the stable production version. (source)

    Homebrew is something of a package manager for OS X. Find more details on the Homebrew page. Once Homebrew is installed, run the following to install the latest Python, Pip & Setuptools:

    brew install python
    

    Answer #4:

    I"m surprised no-one has mentioned this - since 2013, python itself is capable of installing pip, no external commands (and no internet connection) required.

    sudo -H python -m ensurepip
    

    This will create an install similar to what easy_install would.

    Answer #5:

    On Mac:

    1. Install easy_install

      curl https://bootstrap.pypa.io/ez_setup.py -o - | sudo python
      
    2. Install pip

      sudo easy_install pip
      
    3. Now, you could install external modules. For example

      pip install regex   # This is only an example for installing other modules
      


    How do I merge two dictionaries in a single expression (taking union of dictionaries)?

    Question by Carl Meyer

    I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged (i.e. taking the union). The update() method would be what I need, if it returned its result instead of modifying a dictionary in-place.

    >>> x = {"a": 1, "b": 2}
    >>> y = {"b": 10, "c": 11}
    >>> z = x.update(y)
    >>> print(z)
    None
    >>> x
    {"a": 1, "b": 10, "c": 11}
    

    How can I get that final merged dictionary in z, not x?

    (To be extra-clear, the last-one-wins conflict-handling of dict.update() is what I'm looking for as well.)

    Answer #1:

    How can I merge two Python dictionaries in a single expression?

    For dictionaries x and y, z becomes a shallowly-merged dictionary with values from y replacing those from x.

    • In Python 3.9.0 or greater (released 17 October 2020): PEP-584, discussed here, was implemented and provides the simplest method:

      z = x | y          # NOTE: 3.9+ ONLY
      
    • In Python 3.5 or greater:

      z = {**x, **y}
      
    • In Python 2, (or 3.4 or lower) write a function:

      def merge_two_dicts(x, y):
          z = x.copy()   # start with keys and values of x
          z.update(y)    # modifies z with keys and values of y
          return z
      

      and now:

      z = merge_two_dicts(x, y)
      

    Explanation

    Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries:

    x = {"a": 1, "b": 2}
    y = {"b": 3, "c": 4}
    

    The desired result is to get a new dictionary (z) with the values merged, and the second dictionary's values overwriting those from the first.

    >>> z
    {"a": 1, "b": 3, "c": 4}
    

    A new syntax for this, proposed in PEP 448 and available as of Python 3.5, is

    z = {**x, **y}
    

    And it is indeed a single expression.

    Note that we can merge in with literal notation as well:

    z = {**x, "foo": 1, "bar": 2, **y}
    

    and now:

    >>> z
    {"a": 1, "b": 3, "foo": 1, "bar": 2, "c": 4}
    

    It is now shown as implemented in the release schedule for 3.5, PEP 478, and it has made its way into the What's New in Python 3.5 document.

    However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process:

    z = x.copy()
    z.update(y) # which returns None since it mutates z
    

    In both approaches, y will come second and its values will replace x's values, thus b will point to 3 in our final result.

    Not yet on Python 3.5, but want a single expression

    If you are not yet on Python 3.5 or need to write backward-compatible code, and you want this in a single expression, the most performant correct approach is to put it in a function:

    def merge_two_dicts(x, y):
        """Given two dictionaries, merge them into a new dict as a shallow copy."""
        z = x.copy()
        z.update(y)
        return z
    

    and then you have a single expression:

    z = merge_two_dicts(x, y)
    

    You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number:

    def merge_dicts(*dict_args):
        """
        Given any number of dictionaries, shallow copy and merge into a new dict,
        precedence goes to key-value pairs in latter dictionaries.
        """
        result = {}
        for dictionary in dict_args:
            result.update(dictionary)
        return result
    

    This function will work in Python 2 and 3 for all dictionaries. e.g. given dictionaries a to g:

    z = merge_dicts(a, b, c, d, e, f, g) 
    

    and key-value pairs in g will take precedence over dictionaries a to f, and so on.
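    For instance, with concrete dictionaries (repeating the function from above so the snippet is self-contained), the later dictionaries' values win:

```python
def merge_dicts(*dict_args):
    """Shallow-merge any number of dicts; later dicts take precedence."""
    result = {}
    for dictionary in dict_args:
        result.update(dictionary)
    return result

# "a" comes from the second dict, "b" from the third
print(merge_dicts({"a": 1}, {"a": 2, "b": 3}, {"b": 4}))  # -> {'a': 2, 'b': 4}
```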

    Critiques of Other Answers

    Don"t use what you see in the formerly accepted answer:

    z = dict(x.items() + y.items())
    

    In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. In Python 3, this will fail because you're adding two dict_items objects together, not two lists -

    >>> c = dict(a.items() + b.items())
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +: "dict_items" and "dict_items"
    

    and you would have to explicitly create them as lists, e.g. z = dict(list(x.items()) + list(y.items())). This is a waste of resources and computation power.

    Similarly, taking the union of items() in Python 3 (viewitems() in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, since sets are semantically unordered, the behavior is undefined in regards to precedence. So don't do this:

    >>> c = dict(a.items() | b.items())
    

    This example demonstrates what happens when values are unhashable:

    >>> x = {"a": []}
    >>> y = {"b": []}
    >>> dict(x.items() | y.items())
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unhashable type: "list"
    

    Here"s an example where y should have precedence, but instead the value from x is retained due to the arbitrary order of sets:

    >>> x = {"a": 2}
    >>> y = {"a": 1}
    >>> dict(x.items() | y.items())
    {"a": 2}
    

    Another hack you should not use:

    z = dict(x, **y)
    

    This uses the dict constructor and is very fast and memory-efficient (even slightly more so than our two-step process), but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic.

    Here"s an example of the usage being remediated in django.

    Dictionaries are intended to take hashable keys (e.g. frozensets or tuples), but this method fails in Python 3 when keys are not strings.

    >>> c = dict(a, **b)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: keyword arguments must be strings
    

    From the mailing list, Guido van Rossum, the creator of the language, wrote:

    I am fine with declaring dict({}, **{1:3}) illegal, since after all it is abuse of the ** mechanism.

    and

    Apparently dict(x, **y) is going around as "cool hack" for "call x.update(y) and return x". Personally, I find it more despicable than cool.

    It is my understanding (as well as the understanding of the creator of the language) that the intended usage for dict(**y) is for creating dictionaries for readability purposes, e.g.:

    dict(a=1, b=10, c=11)
    

    instead of

    {"a": 1, "b": 10, "c": 11}
    

    Response to comments

    Despite what Guido says, dict(x, **y) is in line with the dict specification, which, by the way, works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a shortcoming of dict. Nor is using the ** operator in this place an abuse of the mechanism; in fact, ** was designed precisely to pass dictionaries as keywords.

    Again, it doesn"t work for 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforced it. dict broke this consistency in Python 2:

    >>> foo(**{("a", "b"): None})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: foo() keywords must be strings
    >>> dict(**{("a", "b"): None})
    {("a", "b"): None}
    

    This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython). Thus it was fixed in Python 3, as this usage could be a breaking change.

    I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints.

    More comments:

    dict(x.items() + y.items()) is still the most readable solution for Python 2. Readability counts.

    My response: merge_two_dicts(x, y) actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated.

    {**x, **y} does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word "merging" these answers describe "updating one dict with another", and not merging.

    Yes. I must refer you back to the question, which is asking for a shallow merge of two dictionaries, with the first's values being overwritten by the second's - in a single expression.

    Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values. As keys must be hashable and are usually therefore immutable, it is pointless to copy them:

    from copy import deepcopy
    
    def dict_of_dicts_merge(x, y):
        z = {}
        overlapping_keys = x.keys() & y.keys()
        for key in overlapping_keys:
            z[key] = dict_of_dicts_merge(x[key], y[key])
        for key in x.keys() - overlapping_keys:
            z[key] = deepcopy(x[key])
        for key in y.keys() - overlapping_keys:
            z[key] = deepcopy(y[key])
        return z
    

    Usage:

    >>> x = {"a":{1:{}}, "b": {2:{}}}
    >>> y = {"b":{10:{}}, "c": {11:{}}}
    >>> dict_of_dicts_merge(x, y)
    {"b": {2: {}, 10: {}}, "a": {1: {}}, "c": {11: {}}}
    

    Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at my answer to the canonical question on a "Dictionaries of dictionaries merge".

    Less Performant But Correct Ad-hocs

    These approaches are less performant, but they will provide correct behavior. They will be much less performant than copy-and-update or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they do respect the order of precedence (latter dictionaries have precedence).

    You can also chain the dictionaries manually inside a dict comprehension:

    {k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7
    

    or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced):

    dict((k, v) for d in dicts for k, v in d.items()) # iteritems in Python 2
    

    itertools.chain will chain the iterators over the key-value pairs in the correct order:

    from itertools import chain
    z = dict(chain(x.items(), y.items())) # iteritems in Python 2
    

    Performance Analysis

    I"m only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.)

    from timeit import repeat
    from itertools import chain
    
    x = dict.fromkeys("abcdefg")
    y = dict.fromkeys("efghijk")
    
    def merge_two_dicts(x, y):
        z = x.copy()
        z.update(y)
        return z
    
    min(repeat(lambda: {**x, **y}))
    min(repeat(lambda: merge_two_dicts(x, y)))
    min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
    min(repeat(lambda: dict(chain(x.items(), y.items()))))
    min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
    

    In Python 3.8.1, NixOS:

    >>> min(repeat(lambda: {**x, **y}))
    1.0804965235292912
    >>> min(repeat(lambda: merge_two_dicts(x, y)))
    1.636518670246005
    >>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
    3.1779992282390594
    >>> min(repeat(lambda: dict(chain(x.items(), y.items()))))
    2.740647904574871
    >>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
    4.266070580109954
    
    $ uname -a
    Linux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux
    


    Answer #2:

    In your case, what you can do is:

    z = dict(list(x.items()) + list(y.items()))
    

    This will, as you want, put the final dict in z and make the value for key b be properly overridden by the second (y) dict's value:

    >>> x = {"a":1, "b": 2}
    >>> y = {"b":10, "c": 11}
    >>> z = dict(list(x.items()) + list(y.items()))
    >>> z
    {"a": 1, "c": 11, "b": 10}
    
    

    If you use Python 2, you can even remove the list() calls. To create z:

    >>> z = dict(x.items() + y.items())
    >>> z
    {"a": 1, "c": 11, "b": 10}
    

    If you use Python version 3.9.0a4 or greater, then you can directly use:

    x = {"a":1, "b": 2}
    y = {"b":10, "c": 11}
    z = x | y
    print(z)
    
    {"a": 1, "c": 11, "b": 10}
    

    Answer #3:

    An alternative:

    z = x.copy()
    z.update(y)
    

    Answer #4:

    Another, more concise, option:

    z = dict(x, **y)
    

    Note: this has become a popular answer, but it is important to point out that if y has any non-string keys, the fact that this works at all is an abuse of a CPython implementation detail, and it does not work in Python 3, or in PyPy, IronPython, or Jython. Also, Guido is not a fan. So I can't recommend this technique for forward-compatible or cross-implementation portable code, which really means it should be avoided entirely.

    Answer #5:

    This probably won"t be a popular answer, but you almost certainly do not want to do this. If you want a copy that"s a merge, then use copy (or deepcopy, depending on what you want) and then update. The two lines of code are much more readable - more Pythonic - than the single line creation with .items() + .items(). Explicit is better than implicit.

    In addition, when you use .items() (pre Python 3.0), you're creating a new list that contains the items from the dict. If your dictionaries are large, then that is quite a lot of overhead (two large lists that will be thrown away as soon as the merged dict is created). update() can work more efficiently, because it can run through the second dict item-by-item.

    In terms of time:

    >>> timeit.Timer("dict(x, **y)", "x = dict(zip(range(1000), range(1000)))
    y=dict(zip(range(1000,2000), range(1000,2000)))").timeit(100000)
    15.52571702003479
    >>> timeit.Timer("temp = x.copy()
    temp.update(y)", "x = dict(zip(range(1000), range(1000)))
    y=dict(zip(range(1000,2000), range(1000,2000)))").timeit(100000)
    15.694622993469238
    >>> timeit.Timer("dict(x.items() + y.items())", "x = dict(zip(range(1000), range(1000)))
    y=dict(zip(range(1000,2000), range(1000,2000)))").timeit(100000)
    41.484580039978027
    

    IMO the tiny slowdown between the first two is worth it for the readability. In addition, keyword arguments for dictionary creation were only added in Python 2.3, whereas copy() and update() will work in older versions.


    Finding the index of an item in a list

    Given a list ["foo", "bar", "baz"] and an item in the list "bar", how do I get its index (1) in Python?

    Answer #1:

    >>> ["foo", "bar", "baz"].index("bar")
    1
    

    Reference: Data Structures > More on Lists

    Caveats follow

    Note that while this is perhaps the cleanest way to answer the question as asked, index is a rather weak component of the list API, and I can't remember the last time I used it in anger. It's been pointed out to me in the comments that because this answer is heavily referenced, it should be made more complete. Some caveats about list.index follow. It is probably worth initially taking a look at the documentation for it:

    list.index(x[, start[, end]])
    

    Return zero-based index in the list of the first item whose value is equal to x. Raises a ValueError if there is no such item.

    The optional arguments start and end are interpreted as in the slice notation and are used to limit the search to a particular subsequence of the list. The returned index is computed relative to the beginning of the full sequence rather than the start argument.

    Linear time-complexity in list length

    An index call checks every element of the list in order, until it finds a match. If your list is long, and you don't know roughly where in the list it occurs, this search could become a bottleneck. In that case, you should consider a different data structure. Note that if you know roughly where to find the match, you can give index a hint. For instance, in this snippet, l.index(999_999, 999_990, 1_000_000) is roughly five orders of magnitude faster than straight l.index(999_999), because the former only has to search 10 entries, while the latter searches a million:

    >>> import timeit
    >>> timeit.timeit("l.index(999_999)", setup="l = list(range(0, 1_000_000))", number=1000)
    9.356267921015387
    >>> timeit.timeit("l.index(999_999, 999_990, 1_000_000)", setup="l = list(range(0, 1_000_000))", number=1000)
    0.0004404920036904514
     
    

    Only returns the index of the first match to its argument

    A call to index searches through the list in order until it finds a match, and stops there. If you expect to need indices of more matches, you should use a list comprehension, or generator expression.

    >>> [1, 1].index(1)
    0
    >>> [i for i, e in enumerate([1, 2, 1]) if e == 1]
    [0, 2]
    >>> g = (i for i, e in enumerate([1, 2, 1]) if e == 1)
    >>> next(g)
    0
    >>> next(g)
    2
    

    Most places where I once would have used index, I now use a list comprehension or generator expression because they"re more generalizable. So if you"re considering reaching for index, take a look at these excellent Python features.

    Throws if element not present in list

    A call to index results in a ValueError if the item's not present.

    >>> [1, 1].index(2)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: 2 is not in list
    

    If the item might not be present in the list, you should either

    1. Check for it first with item in my_list (clean, readable approach), or
    2. Wrap the index call in a try/except block which catches ValueError (probably faster, at least when the list to search is long, and the item is usually present.)
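    Both patterns can be wrapped in a tiny helper; the name safe_index and the -1 default below are just illustrative choices:

```python
def safe_index(lst, value, default=-1):
    """Return the first index of value in lst, or default if it is absent."""
    try:
        return lst.index(value)
    except ValueError:
        return default

print(safe_index(["foo", "bar", "baz"], "bar"))  # -> 1
print(safe_index(["foo", "bar", "baz"], "qux"))  # -> -1
```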

    Answer #2:

    One thing that is really helpful in learning Python is to use the interactive help function:

    >>> help(["foo", "bar", "baz"])
    Help on list object:
    
    class list(object)
     ...
    
     |
     |  index(...)
     |      L.index(value, [start, [stop]]) -> integer -- return first index of value
     |
    

    which will often lead you to the method you are looking for.

    Answer #3:

    The majority of answers explain how to find a single index, but their methods do not return multiple indexes if the item is in the list multiple times. Use enumerate():

    for i, j in enumerate(["foo", "bar", "baz"]):
        if j == "bar":
            print(i)
    

    The index() function only returns the first occurrence, while enumerate() returns all occurrences.

    As a list comprehension:

    [i for i, j in enumerate(["foo", "bar", "baz"]) if j == "bar"]
    

    Here"s also another small solution with itertools.count() (which is pretty much the same approach as enumerate):

    from itertools import izip as zip, count # izip for maximum efficiency
    [i for i, j in zip(count(), ["foo", "bar", "baz"]) if j == "bar"]
    

    This is more efficient for larger lists than using enumerate():

    $ python -m timeit -s "from itertools import izip as zip, count" "[i for i, j in zip(count(), ['foo', 'bar', 'baz']*500) if j == 'bar']"
    10000 loops, best of 3: 174 usec per loop
    $ python -m timeit "[i for i, j in enumerate(['foo', 'bar', 'baz']*500) if j == 'bar']"
    10000 loops, best of 3: 196 usec per loop
    

    Answer #4:

    To get all indexes:

    indexes = [i for i,x in enumerate(xs) if x == "foo"]
    

    Answer #5:

    index() returns the first index of value!

    | index(...)
    | L.index(value, [start, [stop]]) -> integer -- return first index of value

    def all_indices(value, qlist):
        indices = []
        idx = -1
        while True:
            try:
                idx = qlist.index(value, idx+1)
                indices.append(idx)
            except ValueError:
                break
        return indices
    
    all_indices("foo", ["foo";"bar";"baz";"foo"])
    


    InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately

    Tried to perform REST GET through python requests with the following code and I got error.

    Code snip:

    import requests
    header = {"Authorization": "Bearer..."}
    url = az_base_url + az_subscription_id + "/resourcegroups/Default-Networking/resources?" + az_api_version
    r = requests.get(url, headers=header)
    

    Error:

    /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:79: 
              InsecurePlatformWarning: A true SSLContext object is not available. 
              This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. 
              For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
    

    My python version is 2.7.3. I tried to install urllib3 and requests[security] as some other thread suggests, I still got the same error.

    Wonder if anyone can provide some tips?

    Answer #1:

    The docs give a fair indication of what's required; however, requests allows us to skip a few steps:

    You only need to install the security package extras (thanks @admdrew for pointing it out)

    $ pip install requests[security]
    

    or, install them directly:

    $ pip install pyopenssl ndg-httpsclient pyasn1
    

    Requests will then automatically inject pyopenssl into urllib3.


    If you"re on ubuntu, you may run into trouble installing pyopenssl, you"ll need these dependencies:

    $ apt-get install libffi-dev libssl-dev
    

    Answer #2:

    If you are not able to upgrade your Python version to 2.7.9, and want to suppress warnings,

    you can downgrade your "requests" version to 2.5.3:

    pip install requests==2.5.3
    

    Bugfix disclosure / Warning introduced in 2.6.0

    Dynamic instantiation from string name of a class in dynamically imported module?

    In Python, I have to instantiate a certain class, knowing its name as a string, but this class "lives" in a dynamically imported module. An example follows:

    loader-class script:

    import sys
    class loader:
      def __init__(self, module_name, class_name): # both args are strings
        try:
          __import__(module_name)
          modul = sys.modules[module_name]
          instance = modul.class_name() # obviously this doesn't work; here is my main problem!
        except ImportError:
           # manage import error
    

    some-dynamically-loaded-module script:

    class myName:
      # etc...
    

    I use this arrangement so that any dynamically loaded module can be used by the loader class, following certain predefined behaviours in the dynamically loaded modules...

    Answer #1:

    You can use getattr

    getattr(module, class_name)
    

    to access the class. More complete code:

    module = __import__(module_name)
    class_ = getattr(module, class_name)
    instance = class_()
    

    As mentioned below, we may use importlib

    import importlib
    module = importlib.import_module(module_name)
    class_ = getattr(module, class_name)
    instance = class_()
    

    Answer #2:

    tl;dr

    Import the root module with importlib.import_module and load the class by its name using getattr function:

    # Standard import
    import importlib
    # Load "module.submodule.MyClass"
    MyClass = getattr(importlib.import_module("module.submodule"), "MyClass")
    # Instantiate the class (pass arguments to the constructor, if needed)
    instance = MyClass()
    

    explanations

    You probably don"t want to use __import__ to dynamically import a module by name, as it does not allow you to import submodules:

    >>> mod = __import__("os.path")
    >>> mod.join
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: "module" object has no attribute "join"
    

    Here is what the python doc says about __import__:

    Note: This is an advanced function that is not needed in everyday Python programming, unlike importlib.import_module().

    Instead, use the standard importlib module to dynamically import a module by name. With getattr you can then instantiate a class by its name:

    import importlib
    my_module = importlib.import_module("module.submodule")
    MyClass = getattr(my_module, "MyClass")
    instance = MyClass()
    

    You could also write:

    import importlib
    module_name, class_name = "module.submodule.MyClass".rsplit(".", 1)
    MyClass = getattr(importlib.import_module(module_name), class_name)
    instance = MyClass()
    

    This code is valid in Python ≥ 2.7 (including Python 3).
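    Putting the pieces together, here is a minimal self-contained sketch of the helper this answer describes. The collections.OrderedDict path is just an illustrative stand-in for your own "module.submodule.ClassName" string:

```python
import importlib

def load_class(dotted_path):
    """Import 'pkg.mod.ClassName' dynamically and return the class object."""
    module_name, class_name = dotted_path.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Illustration with a stdlib class; any importable dotted path works
OrderedDict = load_class("collections.OrderedDict")
instance = OrderedDict(a=1)
```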

    pandas loc vs. iloc vs. at vs. iat?

    I recently began branching out from my safe place (R) into Python, and I'm a bit confused by cell localization/selection in pandas. I've read the documentation, but I'm struggling to understand the practical implications of the various localization/selection options.

    Is there a reason why I should ever use .loc or .iloc over .at and .iat, or vice versa? In what situations should I use which method?


    Note: future readers be aware that this question is old and was written before pandas v0.20 when there used to exist a function called .ix. This method was later split into two - loc and iloc - to make the explicit distinction between positional and label based indexing. Please beware that ix was discontinued due to inconsistent behavior and being hard to grok, and no longer exists in current versions of pandas (>= 1.0).

    Answer #1:

    loc: label-based; selects by index label
    iloc: position-based; selects by integer position
    at: gets a single scalar value. It's a very fast loc
    iat: gets a single scalar value. It's a very fast iloc

    Also,

    at and iat are meant to access a scalar, that is, a single element in the dataframe, while loc and iloc are meant to access several elements at the same time, potentially to perform vectorized operations.

    http://pyciencia.blogspot.com/2015/05/obtener-y-filtrar-datos-de-un-dataframe.html
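    To make the distinction concrete, here is a minimal sketch on a toy DataFrame (assumes pandas is installed; the column and index names are made up):

```python
import pandas as pd

# Toy frame: rows labelled "a"/"b", columns "x"/"y" (made-up names)
df = pd.DataFrame({"x": [10, 20], "y": [30, 40]}, index=["a", "b"])

label_row  = df.loc["a"]      # label-based: whole row as a Series
pos_row    = df.iloc[0]       # position-based: same row by position
label_cell = df.at["a", "y"]  # label-based scalar access (fast)
pos_cell   = df.iat[0, 1]     # position-based scalar access (fast)
```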

    What To Learn Java Or Javascript First: StackOverflow Questions

    JSON datetime between Python and JavaScript

    Question by kevin

    I want to send a datetime.datetime object in serialized form from Python using JSON and de-serialize in JavaScript using JSON. What is the best way to do this?

    Answer #1:

    You can add the "default" parameter to json.dumps to handle this:

    date_handler = lambda obj: (
        obj.isoformat()
        if isinstance(obj, (datetime.datetime, datetime.date))
        else None
    )
    json.dumps(datetime.datetime.now(), default=date_handler)
    ""2010-04-20T20:08:21.634121""
    

    Which is ISO 8601 format.

    A more comprehensive default handler function:

    def handler(obj):
        if hasattr(obj, "isoformat"):
            return obj.isoformat()
        elif isinstance(obj, ...):
            return ...
        else:
            raise TypeError("Object of type %s with value of %s is not JSON serializable" % (type(obj), repr(obj)))
    

    Update: Added output of type as well as value.
    Update: Also handle date
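    The same hook can also be packaged as a JSONEncoder subclass; here is a hedged sketch of that equivalent approach (both the default method and the cls parameter are part of the stdlib json API):

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    """Serialize datetime/date objects as ISO 8601 strings."""
    def default(self, obj):
        if isinstance(obj, (datetime.datetime, datetime.date)):
            return obj.isoformat()
        return super().default(obj)  # falls back to the usual TypeError

payload = json.dumps({"now": datetime.datetime(2010, 4, 20, 20, 8, 21)},
                     cls=DateTimeEncoder)
```

    On the JavaScript side, new Date(...) accepts the resulting ISO 8601 string directly.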

    Javascript equivalent of Python's zip function

    Is there a JavaScript equivalent of Python's zip function? That is, given multiple arrays of equal length, create an array of tuples.

    For instance, if I have three arrays that look like this:

    var array1 = [1, 2, 3];
    var array2 = ["a","b","c"];
    var array3 = [4, 5, 6];
    

    The output array should be:

    var output = [[1,"a",4], [2,"b",5], [3,"c",6]];
    

    Answer #1:

    2016 update:

    Here's a snazzier ECMAScript 6 version:

    zip = rows => rows[0].map((_, c) => rows.map(row => row[c]))
    

    Illustration equiv. to Python{zip(*args)}:

    > zip([["row0col0", "row0col1", "row0col2"],
           ["row1col0", "row1col1", "row1col2"]]);
    [["row0col0","row1col0"],
     ["row0col1","row1col1"],
     ["row0col2","row1col2"]]
    

    (and FizzyTea points out that ES6 has variadic argument syntax, so the following function definition will act like python, but see below for disclaimer... this will not be its own inverse so zip(zip(x)) will not equal x; though as Matt Kramer points out zip(...zip(...x))==x (like in regular python zip(*zip(*x))==x))

    Alternative definition equiv. to Python{zip}:

    > zip = (...rows) => [...rows[0]].map((_,c) => rows.map(row => row[c]))
    > zip( ["row0col0", "row0col1", "row0col2"] ,
           ["row1col0", "row1col1", "row1col2"] );
                 // note zip(row0,row1), not zip(matrix)
    same answer as above
    

    (Do note that the ... syntax may have performance issues at this time, and possibly in the future, so if you use the second answer with variadic arguments, you may want to perf test it. That said, it's been quite a while since it's been in the standard.)

    Make sure to note the addendum if you wish to use this on strings (perhaps there's a better way to do it now with ES6 iterables).


    Here's a one-liner:

    function zip(arrays) {
        return arrays[0].map(function(_,i){
            return arrays.map(function(array){return array[i]})
        });
    }
    
    // > zip([[1,2],[11,22],[111,222]])
    // [[1,11,111],[2,22,222]]
    
    // If you believe the following is a valid return value:
    //   > zip([])
    //   []
    // then you can special-case it, or just do
    //  return arrays.length==0 ? [] : arrays[0].map(...)
    

    The above assumes that the arrays are of equal size, as they should be. It also assumes you pass in a single list-of-lists argument, unlike Python's version, where the argument list is variadic. If you want all of these "features", see below. It takes just about 2 extra lines of code.

    The following will mimic Python's zip behavior on edge cases where the arrays are not of equal size, silently pretending the longer parts of arrays don't exist:

    function zip() {
        var args = [].slice.call(arguments);
        var shortest = args.length==0 ? [] : args.reduce(function(a,b){
            return a.length<b.length ? a : b
        });
    
        return shortest.map(function(_,i){
            return args.map(function(array){return array[i]})
        });
    }
    
    // > zip([1,2],[11,22],[111,222,333])
    // [[1,11,111],[2,22,222]]
    
    // > zip()
    // []
    

    This will mimic Python's itertools.zip_longest behavior, inserting undefined where arrays are not defined:

    function zip() {
        var args = [].slice.call(arguments);
        var longest = args.reduce(function(a,b){
            return a.length>b.length ? a : b
        }, []);
    
        return longest.map(function(_,i){
            return args.map(function(array){return array[i]})
        });
    }
    
    // > zip([1,2],[11,22],[111,222,333])
    // [[1,11,111],[2,22,222],[undefined,undefined,333]]
    
    // > zip()
    // []
    

    If you use these last two versions (variadic, aka multiple-argument versions), then zip is no longer its own inverse. To mimic the zip(*[...]) idiom from Python, you will need to do zip.apply(this, [...]) when you want to invert the zip function or if you want to similarly have a variable number of lists as input.
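    For comparison, here is the Python behavior these JavaScript versions are mimicking: the built-in zip truncates to the shortest input, while itertools.zip_longest pads the shorter inputs with a fill value.

```python
import itertools

# zip stops at the shortest input
short = list(zip([1, 2], [11, 22], [111, 222, 333]))

# zip_longest pads missing items with fillvalue (None by default)
long_ = list(itertools.zip_longest([1, 2], [11, 22], [111, 222, 333]))
```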


    addendum:

    To make this handle any iterable (e.g. in Python you can use zip on strings, ranges, map objects, etc.), you could define the following:

    function iterView(iterable) {
        // returns an array equivalent to the iterable
        return Array.from(iterable);
    }
    

    However, if you write zip in the following way, even that won't be necessary:

    function zip(arrays) {
        return Array.apply(null,Array(arrays[0].length)).map(function(_,i){
            return arrays.map(function(array){return array[i]})
        });
    }
    

    Demo:

    > JSON.stringify( zip(["abcde",[1,2,3,4,5]]) )
    [["a",1],["b",2],["c",3],["d",4],["e",5]]
    

    (Or you could use a range(...) Python-style function if you"ve written one already. Eventually you will be able to use ECMAScript array comprehensions or generators.)

    What blocks Ruby, Python to get Javascript V8 speed?

    Are there any Ruby / Python features that are blocking implementation of optimizations (e.g. inline caching) V8 engine has?

    Python is co-developed by Google guys so it shouldn"t be blocked by software patents.

    Or this is rather matter of resources put into the V8 project by Google.

    Answer #1:

    What blocks Ruby, Python to get Javascript V8 speed?

    Nothing.

    Well, okay: money. (And time, people, resources, but if you have money, you can buy those.)

    V8 has a team of brilliant, highly-specialized, highly-experienced (and thus highly-paid) engineers working on it, who have decades of experience (I'm talking individually; collectively it's more like centuries) in creating high-performance execution engines for dynamic OO languages. They are basically the same people who also created the Sun HotSpot JVM (among many others).

    Lars Bak, the lead developer, has been literally working on VMs for 25 years (and all of those VMs have led up to V8), which is basically his entire (professional) life. Some of the people writing Ruby VMs aren't even 25 years old.

    Are there any Ruby / Python features that are blocking implementation of optimizations (e.g. inline caching) V8 engine has?

    Given that at least IronRuby, JRuby, MagLev, MacRuby and Rubinius have either monomorphic (IronRuby) or polymorphic inline caching, the answer is obviously no.

    Modern Ruby implementations already do a great deal of optimizations. For example, for certain operations, Rubinius's Hash class is faster than YARV's. Now, this doesn't sound terribly exciting until you realize that Rubinius's Hash class is implemented in 100% pure Ruby, while YARV's is implemented in 100% hand-optimized C.

    So, at least in some cases, Rubinius can generate better code than GCC!

    Or this is rather matter of resources put into the V8 project by Google.

    Yes. Not just Google. The lineage of V8's source code is 25 years old now. The people who are working on V8 also created the Self VM (to this day one of the fastest dynamic OO language execution engines ever created), the Animorphic Smalltalk VM (to this day one of the fastest Smalltalk execution engines ever created), the HotSpot JVM (the fastest JVM ever created, probably the fastest VM, period) and OOVM (one of the most efficient Smalltalk VMs ever created).

    In fact, Lars Bak, the lead developer of V8, worked on every single one of those, plus a few others.

    Django Template Variables and Javascript

    When I render a page using the Django template renderer, I can pass in a dictionary variable containing various values to manipulate them in the page using {{ myVar }}.

    Is there a way to access the same variable in JavaScript (perhaps using the DOM; I don't know how Django makes the variables accessible)? I want to be able to look up details using an AJAX lookup based on the values contained in the variables passed in.

    Answer #1:

    The {{variable}} is substituted directly into the HTML. Do a view source; it isn't a "variable" or anything like it. It's just rendered text.

    Having said that, you can put this kind of substitution into your JavaScript.

    <script type="text/javascript"> 
       var a = "{{someDjangoVariable}}";
    </script>
    

    This gives you "dynamic" javascript.
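    Note that pasting {{ someDjangoVariable }} into a string literal breaks as soon as the value contains quotes or newlines. A safer pattern, sketched here outside of Django (the variable names are made up), is to JSON-encode the value server-side and emit the encoded form; Django 2.1+ also ships a json_script template tag for exactly this purpose.

```python
import json

# Server-side: a value containing characters that would break a JS string literal
context_value = 'He said "hi"\nsecond line'

# json.dumps produces a valid, safely escaped JavaScript expression
js_literal = json.dumps(context_value)
# In the template you would then write: var a = {{ some_json|safe }};
```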

    Web-scraping JavaScript page with Python

    I'm trying to develop a simple web scraper. I want to extract text without the HTML code. In fact, I achieve this goal, but I have seen that on some pages where JavaScript is loaded I didn't obtain good results.

    For example, if some JavaScript code adds some text, I can't see it, because when I call

    response = urllib2.urlopen(request)
    

    I get the original text without the added one (because JavaScript is executed in the client).

    So, I'm looking for some ideas to solve this problem.

    Answer #1:

    EDIT 30/Dec/2017: This answer appears in top results of Google searches, so I decided to update it. The old answer is still at the end.

    dryscrape isn't maintained anymore, and the library the dryscrape developers recommend is Python 2 only. I have found using Selenium's Python library with PhantomJS as a web driver fast enough and easy to get the work done.

    Once you have installed Phantom JS, make sure the phantomjs binary is available in the current path:

    phantomjs --version
    # result:
    2.1.1
    

    Example

    To give an example, I created a sample page with the following HTML code (link):

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>Javascript scraping test</title>
    </head>
    <body>
      <p id="intro-text">No javascript support</p>
      <script>
         document.getElementById("intro-text").innerHTML = "Yay! Supports javascript";
      </script> 
    </body>
    </html>
    

    Without JavaScript it says: "No javascript support", and with JavaScript: "Yay! Supports javascript".

    Scraping without JS support:

    import requests
    from bs4 import BeautifulSoup
    response = requests.get(my_url)
    soup = BeautifulSoup(response.text)
    soup.find(id="intro-text")
    # Result:
    <p id="intro-text">No javascript support</p>
    

    Scraping with JS support:

    from selenium import webdriver
    driver = webdriver.PhantomJS()
    driver.get(my_url)
    p_element = driver.find_element_by_id(id_="intro-text")
    print(p_element.text)
    # result:
    "Yay! Supports javascript"
    

    You can also use Python library dryscrape to scrape javascript driven websites.

    Scraping with JS support:

    import dryscrape
    from bs4 import BeautifulSoup
    session = dryscrape.Session()
    session.visit(my_url)
    response = session.body()
    soup = BeautifulSoup(response)
    soup.find(id="intro-text")
    # Result:
    <p id="intro-text">Yay! Supports javascript</p>
    


    Why is it string.join(list) instead of list.join(string)?

    Question by Evan Fosmark

    This has always confused me. It seems like this would be nicer:

    my_list = ["Hello", "world"]
    print(my_list.join("-"))
    # Produce: "Hello-world"
    

    Than this:

    my_list = ["Hello", "world"]
    print("-".join(my_list))
    # Produce: "Hello-world"
    

    Is there a specific reason it is like this?

    Answer #1:

    It's because any iterable can be joined (e.g. list, tuple, dict, set), but its contents and the "joiner" must be strings.

    For example:

    "_".join(["welcome", "to", "stack", "overflow"])
    "_".join(("welcome", "to", "stack", "overflow"))
    
    "welcome_to_stack_overflow"
    

    Using something other than strings will raise the following error:

    TypeError: sequence item 0: expected str instance, int found
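
    The usual fix is to convert the items to strings first, e.g. with map or a generator expression:

```python
joined = "_".join(map(str, [1, 2, 3]))        # convert each item first
same = "_".join(str(n) for n in [1, 2, 3])    # equivalent generator form
```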
    

    Answer #2:

    This was discussed in the String methods... finally thread in the Python-Dev archive, and was accepted by Guido. This thread began in Jun 1999, and str.join was included in Python 1.6, which was released in Sep 2000 (and supported Unicode). Python 2.0 (which supported str methods including join) was released in Oct 2000.

    • There were four options proposed in this thread:
      • str.join(seq)
      • seq.join(str)
      • seq.reduce(str)
      • join as a built-in function
    • Guido wanted to support not only lists and tuples, but all sequences/iterables.
    • seq.reduce(str) is difficult for newcomers.
    • seq.join(str) introduces unexpected dependency from sequences to str/unicode.
    • join() as a built-in function would support only specific data types. So using a built-in namespace is not good. If join() supports many datatypes, creating an optimized implementation would be difficult; if implemented using the __add__ method then it would be O(n²).
    • The separator string (sep) should not be omitted. Explicit is better than implicit.

    Here are some additional thoughts (my own, and my friend's):

    • Unicode support was coming, but it was not final. At that time UTF-8 was the most likely candidate to replace UCS2/4. To calculate the total buffer length of UTF-8 strings, one needs to know the character encoding rule.
    • At that time, Python had already decided on a common sequence interface rule where a user could create a sequence-like (iterable) class. But Python didn't support extending built-in types until 2.2. At that time it was difficult to provide a basic iterable class (which is mentioned in another comment).

    Guido's decision is recorded in a historical mail, deciding on str.join(seq):

    Funny, but it does seem right! Barry, go for it...
    Guido van Rossum

    Answer #3:

    Because the join() method is in the string class, instead of the list class?

    I agree it looks funny.

    See http://www.faqs.org/docs/diveintopython/odbchelper_join.html:

    Historical note. When I first learned Python, I expected join to be a method of a list, which would take the delimiter as an argument. Lots of people feel the same way, and there’s a story behind the join method. Prior to Python 1.6, strings didn’t have all these useful methods. There was a separate string module which contained all the string functions; each function took a string as its first argument. The functions were deemed important enough to put onto the strings themselves, which made sense for functions like lower, upper, and split. But many hard-core Python programmers objected to the new join method, arguing that it should be a method of the list instead, or that it shouldn’t move at all but simply stay a part of the old string module (which still has lots of useful stuff in it). I use the new join method exclusively, but you will see code written either way, and if it really bothers you, you can use the old string.join function instead.

    --- Mark Pilgrim, Dive into Python

    join list of lists in python

    Question by Kozyarchuk

    Is there a short syntax for joining a list of lists into a single list (or iterator) in Python?

    For example, I have a list as follows and I want to iterate over a, b and c.

    x = [["a";"b"], ["c"]]
    

    The best I can come up with is as follows.

    result = []
    [ result.extend(el) for el in x] 
    
    for el in result:
      print el
    

    Answer #1:

    import itertools
    a = [["a","b"], ["c"]]
    print(list(itertools.chain.from_iterable(a)))
    

    Answer #2:

    x = [["a";"b"], ["c"]]
    
    result = sum(x, [])
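
    Two more common spellings of the same flattening, for comparison (note that sum(x, []) re-copies the accumulator on every step, so it is quadratic on large inputs):

```python
x = [["a", "b"], ["c"]]

flat_comp = [el for sub in x for el in sub]  # nested list comprehension
flat_sum = sum(x, [])                        # concise, but O(n^2) copying
```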
    


    Python's equivalent of && (logical-and) in an if-statement

    Question by delete

    Here's my code:

    def front_back(a, b):
      # +++your code here+++
      if len(a) % 2 == 0 && len(b) % 2 == 0:
        return a[:(len(a)/2)] + b[:(len(b)/2)] + a[(len(a)/2):] + b[(len(b)/2):] 
      else:
        #todo! Not yet done. :P
      return
    

    I'm getting an error in the IF conditional.
    What am I doing wrong?

    Answer #1:

    You would want and instead of &&.

    Answer #2:

    Python uses and and or conditionals.

    i.e.

    if foo == "abc" and bar == "bac" or zoo == "123":
      # do something
    

    Answer #3:

    I'm getting an error in the IF conditional. What am I doing wrong?

    The reason you get a SyntaxError is that there is no && operator in Python. Likewise, || and ! are not valid Python operators.

    Some of the operators you may know from other languages have a different name in Python. The logical operators && and || are actually called and and or. Likewise the logical negation operator ! is called not.

    So you could just write:

    if len(a) % 2 == 0 and len(b) % 2 == 0:
    

    or even:

    if not (len(a) % 2 or len(b) % 2):
    

    Some additional information (that might come in handy):

    I summarized the operator "equivalents" in this table:

    +------------------------------+---------------------+
    |  Operator (other languages)  |  Operator (Python)  |
    +==============================+=====================+
    |              &&              |         and         |
    +------------------------------+---------------------+
    |              ||              |         or          |
    +------------------------------+---------------------+
    |              !               |         not         |
    +------------------------------+---------------------+
    

    See also Python documentation: 6.11. Boolean operations.

    Besides the logical operators Python also has bitwise/binary operators:

    +--------------------+--------------------+
    |  Logical operator  |  Bitwise operator  |
    +====================+====================+
    |        and         |         &          |
    +--------------------+--------------------+
    |         or         |         |          |
    +--------------------+--------------------+
    

    There is no bitwise negation in Python (just the bitwise inverse operator ~, but that is not equivalent to not).

    See also 6.6. Unary arithmetic and bitwise/binary operations and 6.7. Binary arithmetic operations.

    The logical operators (like in many other languages) have the advantage that they are short-circuited. That means if the first operand already defines the result, then the second operand isn't evaluated at all.

    To show this I use a function that simply takes a value, prints it and returns it again. This is handy to see what is actually evaluated because of the print statements:

    >>> def print_and_return(value):
    ...     print(value)
    ...     return value
    
    >>> res = print_and_return(False) and print_and_return(True)
    False
    

    As you can see, only one print statement is executed, so Python really didn't even look at the right operand.

    This is not the case for the binary operators. Those always evaluate both operands:

    >>> res = print_and_return(False) & print_and_return(True);
    False
    True
    

    But if the first operand isn't enough then, of course, the second operand is evaluated:

    >>> res = print_and_return(True) and print_and_return(False);
    True
    False
    

    To summarize this here is another Table:

    +-----------------+-------------------------+
    |   Expression    |  Right side evaluated?  |
    +=================+=========================+
    | `True` and ...  |           Yes           |
    +-----------------+-------------------------+
    | `False` and ... |           No            |
    +-----------------+-------------------------+
    |  `True` or ...  |           No            |
    +-----------------+-------------------------+
    | `False` or ...  |           Yes           |
    +-----------------+-------------------------+
    

    The True and False represent what bool(left-hand-side) returns. They don't have to be True or False; they just need to return True or False when bool is called on them (1).

    So in Pseudo-Code(!) the and and or functions work like these:

    def and(expr1, expr2):
        left = evaluate(expr1)
        if bool(left):
            return evaluate(expr2)
        else:
            return left
    
    def or(expr1, expr2):
        left = evaluate(expr1)
        if bool(left):
            return left
        else:
            return evaluate(expr2)
    

    Note that this is pseudo-code, not Python code. In Python you cannot create functions called and or or because these are keywords. Also, real code would never spell out "evaluate" or if bool(...) explicitly like this.

    Customizing the behavior of your own classes

    This implicit bool call can be used to customize how your classes behave with and, or and not.

    To show how this can be customized I use this class which again prints something to track what is happening:

    class Test(object):
        def __init__(self, value):
            self.value = value
    
        def __bool__(self):
            print("__bool__ called on {!r}".format(self))
            return bool(self.value)
    
        __nonzero__ = __bool__  # Python 2 compatibility
    
        def __repr__(self):
            return "{self.__class__.__name__}({self.value})".format(self=self)
    

    So let"s see what happens with that class in combination with these operators:

    >>> if Test(True) and Test(False):
    ...     pass
    __bool__ called on Test(True)
    __bool__ called on Test(False)
    
    >>> if Test(False) or Test(False):
    ...     pass
    __bool__ called on Test(False)
    __bool__ called on Test(False)
    
    >>> if not Test(True):
    ...     pass
    __bool__ called on Test(True)
    

    If you don't have a __bool__ method then Python also checks if the object has a __len__ method and if it returns a value greater than zero. That might be useful to know in case you create a sequence container.

    See also 4.1. Truth Value Testing.

    NumPy arrays and subclasses

    Probably a bit beyond the scope of the original question, but in case you're dealing with NumPy arrays or subclasses (like Pandas Series or DataFrames), then the implicit bool call will raise the dreaded ValueError:

    >>> import numpy as np
    >>> arr = np.array([1,2,3])
    >>> bool(arr)
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
    >>> arr and arr
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
    
    >>> import pandas as pd
    >>> s = pd.Series([1,2,3])
    >>> bool(s)
    ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
    >>> s and s
    ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
    

    In these cases you can use the logical_and function from NumPy, which performs an element-wise and (or logical_or for or):

    >>> np.logical_and(np.array([False,False,True,True]), np.array([True, False, True, False]))
    array([False, False,  True, False])
    >>> np.logical_or(np.array([False,False,True,True]), np.array([True, False, True, False]))
    array([ True, False,  True,  True])
    

    If you're dealing just with boolean arrays you could also use the binary operators with NumPy; these do perform element-wise (but also binary) comparisons:

    >>> np.array([False,False,True,True]) & np.array([True, False, True, False])
    array([False, False,  True, False])
    >>> np.array([False,False,True,True]) | np.array([True, False, True, False])
    array([ True, False,  True,  True])
    

    (1)

    That the bool call on the operands has to return True or False isn't completely correct. It's just the first operand that needs to return a boolean in its __bool__ method:

    class Test(object):
        def __init__(self, value):
            self.value = value
    
        def __bool__(self):
            return self.value
    
        __nonzero__ = __bool__  # Python 2 compatibility
    
        def __repr__(self):
            return "{self.__class__.__name__}({self.value})".format(self=self)
    
    >>> x = Test(10) and Test(10)
    TypeError: __bool__ should return bool, returned int
    >>> x1 = Test(True) and Test(10)
    >>> x2 = Test(False) and Test(10)
    

    That's because and actually returns the first operand if the first operand evaluates to False, and if it evaluates to True then it returns the second operand:

    >>> x1
    Test(10)
    >>> x2
    Test(False)
    

    Similarly for or but just the other way around:

    >>> Test(True) or Test(10)
    Test(True)
    >>> Test(False) or Test(10)
    Test(10)
    

    However, if you use them in an if statement, the if will also implicitly call bool on the result, so these finer points may not be relevant for you.

    How do you get the logical xor of two variables in Python?

    Question by Zach Hirsch

    How do you get the logical xor of two variables in Python?

    For example, I have two variables that I expect to be strings. I want to test that only one of them contains a True value (is not None or the empty string):

    str1 = raw_input("Enter string one:")
    str2 = raw_input("Enter string two:")
    if logical_xor(str1, str2):
        print "ok"
    else:
        print "bad"
    

    The ^ operator seems to be bitwise, and not defined on all objects:

    >>> 1 ^ 1
    0
    >>> 2 ^ 1
    3
    >>> "abc" ^ ""
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for ^: "str" and "str"
    

    Answer #1:

    If you're already normalizing the inputs to booleans, then != is xor.

    bool(a) != bool(b)
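
    A minimal sketch of that idea, wrapped in the logical_xor function from the question:

```python
def logical_xor(a, b):
    """True when exactly one argument is truthy."""
    return bool(a) != bool(b)
```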
    

    Answer #2:

    You can always use the definition of xor to compute it from other logical operations:

    (a and not b) or (not a and b)
    

    But this is a little too verbose for me, and isn't particularly clear at first glance. Another way to do it is:

    bool(a) ^ bool(b)
    

    The xor operator on two booleans is logical xor (unlike on ints, where it's bitwise). Which makes sense, since bool is just a subclass of int, but it is implemented to only have the values 0 and 1. And logical xor is equivalent to bitwise xor when the domain is restricted to 0 and 1.

    So the logical_xor function would be implemented like:

    def logical_xor(str1, str2):
        return bool(str1) ^ bool(str2)
    

    Credit to Nick Coghlan on the Python-3000 mailing list.


    Meaning of @classmethod and @staticmethod for beginner?

    Question by user1632861

    Could someone explain to me the meaning of @classmethod and @staticmethod in Python? I need to know the difference and the meaning.

    As far as I understand, @classmethod tells a class that it's a method which should be inherited into subclasses, or... something. However, what's the point of that? Why not just define the class method without adding @classmethod or @staticmethod or any @ definitions?

    tl;dr: when should I use them, why should I use them, and how should I use them?

    Answer #1:

    Though classmethod and staticmethod are quite similar, there's a slight difference in usage for both entities: classmethod must have a reference to a class object as the first parameter, whereas staticmethod can have no parameters at all.

    Example

    class Date(object):
    
        def __init__(self, day=0, month=0, year=0):
            self.day = day
            self.month = month
            self.year = year
    
        @classmethod
        def from_string(cls, date_as_string):
            day, month, year = map(int, date_as_string.split("-"))
            date1 = cls(day, month, year)
            return date1
    
        @staticmethod
        def is_date_valid(date_as_string):
            day, month, year = map(int, date_as_string.split("-"))
            return day <= 31 and month <= 12 and year <= 3999
    
    date2 = Date.from_string("11-09-2012")
    is_date = Date.is_date_valid("11-09-2012")
    

    Explanation

    Let's assume an example of a class dealing with date information (this will be our boilerplate):

    class Date(object):
    
        def __init__(self, day=0, month=0, year=0):
            self.day = day
            self.month = month
            self.year = year
    

    This class obviously could be used to store information about certain dates (without timezone information; let's assume all dates are presented in UTC).

    Here we have __init__, a typical initializer of Python class instances, which receives arguments as a typical instancemethod, having the first non-optional argument (self) that holds a reference to a newly created instance.

    Class Method

    We have some tasks that can be nicely done using classmethods.

    Let's assume that we want to create a lot of Date class instances whose date information comes from an outer source encoded as a string with format "dd-mm-yyyy". Suppose we have to do this in different places in the source code of our project.

    So what we must do here is:

    1. Parse a string to receive day, month and year as three integer variables or a 3-item tuple consisting of those variables.
    2. Instantiate Date by passing those values to the initialization call.

    This will look like:

    day, month, year = map(int, string_date.split("-"))
    date1 = Date(day, month, year)
    

    In C++ this could be solved with constructor overloading, but Python lacks overloading. Instead, we can use classmethod. Let's create another "constructor".

        @classmethod
        def from_string(cls, date_as_string):
            day, month, year = map(int, date_as_string.split("-"))
            date1 = cls(day, month, year)
            return date1
    
    date2 = Date.from_string("11-09-2012")
    

    Let's look more carefully at the above implementation and review what advantages we have here:

    1. We've implemented date string parsing in one place, and it's reusable now.
    2. Encapsulation works fine here (if you think you could implement string parsing as a single function elsewhere, this solution fits the OOP paradigm far better).
    3. cls is an object that holds the class itself, not an instance of the class. It's pretty cool because if we inherit our Date class, all children will have from_string defined as well.
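    To make the third advantage concrete, here is a minimal, self-contained sketch (the EuroDate subclass is hypothetical, introduced only for illustration): because from_string instantiates via cls, calling it on a subclass produces a subclass instance with no extra code.

```python
class Date:
    def __init__(self, day=0, month=0, year=0):
        self.day = day
        self.month = month
        self.year = year

    @classmethod
    def from_string(cls, date_as_string):
        day, month, year = map(int, date_as_string.split("-"))
        return cls(day, month, year)  # cls is whatever class the call came through


class EuroDate(Date):  # hypothetical subclass; inherits from_string for free
    pass


d = EuroDate.from_string("11-09-2012")
print(type(d).__name__)  # EuroDate, not Date
```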

    Static method

    What about staticmethod? It's pretty similar to classmethod but doesn't take any obligatory parameters (like a class method or instance method does).

    Let's look at the next use case.

    We have a date string that we want to validate somehow. This task is also logically bound to the Date class we've used so far, but doesn't require instantiation of it.

    Here is where staticmethod can be useful. Let's look at the next piece of code:

        @staticmethod
        def is_date_valid(date_as_string):
            day, month, year = map(int, date_as_string.split("-"))
            return day <= 31 and month <= 12 and year <= 3999
    
        # usage:
        is_date = Date.is_date_valid("11-09-2012")
    

    So, as we can see from this usage of staticmethod, we don't have any access to what the class is---it's basically just a function, called syntactically like a method, but without access to the object and its internals (fields and other methods), whereas classmethod does have that access.

    Answer #2:

    Rostyslav Dzinko's answer is very appropriate. I thought I could highlight one other reason you should choose @classmethod over @staticmethod when you are creating an additional constructor.

    In the example above, Rostyslav used the @classmethod from_string as a Factory to create Date objects from otherwise unacceptable parameters. The same can be done with @staticmethod as is shown in the code below:

    class Date:
      def __init__(self, month, day, year):
        self.month = month
        self.day   = day
        self.year  = year
    
    
      def display(self):
        return "{0}-{1}-{2}".format(self.month, self.day, self.year)
    
    
      @staticmethod
      def millenium(month, day):
        return Date(month, day, 2000)
    
    new_year = Date(1, 1, 2013)               # Creates a new Date object
    millenium_new_year = Date.millenium(1, 1) # also creates a Date object. 
    
    # Proof:
    new_year.display()           # "1-1-2013"
    millenium_new_year.display() # "1-1-2000"
    
    isinstance(new_year, Date) # True
    isinstance(millenium_new_year, Date) # True
    

    Thus both new_year and millenium_new_year are instances of the Date class.

    But, if you observe closely, the Factory process is hard-coded to create Date objects no matter what. What this means is that even if the Date class is subclassed, the subclasses will still create plain Date objects (without any properties of the subclass). See that in the example below:

    class DateTime(Date):
      def display(self):
          return "{0}-{1}-{2} - 00:00:00PM".format(self.month, self.day, self.year)
    
    
    datetime1 = DateTime(10, 10, 1990)
    datetime2 = DateTime.millenium(10, 10)
    
    isinstance(datetime1, DateTime) # True
    isinstance(datetime2, DateTime) # False
    
    datetime1.display() # returns "10-10-1990 - 00:00:00PM"
    datetime2.display() # returns "10-10-2000" because it's not a DateTime object but a Date object. Check the implementation of the millenium method on the Date class for more details.
    

    datetime2 is not an instance of DateTime? WTF? Well, that's because of the @staticmethod decorator used.

    In most cases, this is undesired. If what you want is a Factory method that is aware of the class that called it, then @classmethod is what you need.

    Rewriting Date.millenium as follows (that's the only part of the above code that changes):

    @classmethod
    def millenium(cls, month, day):
        return cls(month, day, 2000)
    

    ensures that the class is not hard-coded but rather learnt. cls can be any subclass, and the resulting object will rightly be an instance of cls.
    Let's test that out:

    datetime1 = DateTime(10, 10, 1990)
    datetime2 = DateTime.millenium(10, 10)
    
    isinstance(datetime1, DateTime) # True
    isinstance(datetime2, DateTime) # True
    
    
    datetime1.display() # "10-10-1990 - 00:00:00PM"
    datetime2.display() # "10-10-2000 - 00:00:00PM"
    

    The reason is, as you know by now, that @classmethod was used instead of @staticmethod.

    Answer #3:

    @classmethod means: when this method is called, we pass the class as the first argument instead of the instance of that class (as we normally do with methods). This means you can use the class and its properties inside that method rather than a particular instance.

    @staticmethod means: when this method is called, we don't pass an instance of the class to it (as we normally do with methods). This means you can put a function inside a class but you can't access the instance of that class (this is useful when your method does not use the instance).
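    A tiny side-by-side sketch of the two decorators (the Greeter class and its names are made up for illustration): the classmethod can read class-level state through cls, while the staticmethod is just a plain function living in the class namespace.

```python
class Greeter:
    greeting = "Hello"              # class-level state

    @classmethod
    def greet(cls, name):
        # receives the class implicitly, so it can use class attributes
        return "{}, {}!".format(cls.greeting, name)

    @staticmethod
    def shout(text):
        # receives nothing implicitly; no access to the class or instance
        return text.upper()


print(Greeter.greet("world"))   # Hello, world!
print(Greeter.shout("hi"))      # HI
```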

    What is the meaning of single and double underscore before an object name?

    Can someone please explain the exact meaning of having single and double leading underscores before an object's name in Python, and the difference between both?

    Also, does that meaning stay the same regardless of whether the object in question is a variable, a function, a method, etc.?

    Answer #1:

    Single Underscore

    Names, in a class, with a leading underscore are simply to indicate to other programmers that the attribute or method is intended to be private. However, nothing special is done with the name itself.

    To quote PEP-8:

    _single_leading_underscore: weak "internal use" indicator. E.g. from M import * does not import objects whose name starts with an underscore.
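    The import-star behavior mentioned in PEP-8 can be demonstrated in one self-contained script by building a throwaway module (the module name m and its two functions are made up for the example):

```python
import sys
import types

# Build a throwaway module so the example needs no separate file.
m = types.ModuleType("m")
exec("def public():\n    return 1\n\ndef _internal():\n    return 2\n", m.__dict__)
sys.modules["m"] = m

ns = {}
exec("from m import *", ns)
print("public" in ns)      # True
print("_internal" in ns)   # False: without __all__, underscore names are skipped
```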

    Double Underscore (Name Mangling)

    From the Python docs:

    Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, so it can be used to define class-private instance and class variables, methods, variables stored in globals, and even variables stored in instances, private to this class on instances of other classes.

    And a warning from the same page:

    Name mangling is intended to give classes an easy way to define “private” instance variables and methods, without having to worry about instance variables defined by derived classes, or mucking with instance variables by code outside the class. Note that the mangling rules are designed mostly to avoid accidents; it still is possible for a determined soul to access or modify a variable that is considered private.

    Example

    >>> class MyClass:
    ...     def __init__(self):
    ...         self.__superprivate = "Hello"
    ...         self._semiprivate = ", world!"
    ...
    >>> mc = MyClass()
    >>> print(mc.__superprivate)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'MyClass' object has no attribute '__superprivate'
    >>> print(mc._semiprivate)
    , world!
    >>> print(mc.__dict__)
    {'_MyClass__superprivate': 'Hello', '_semiprivate': ', world!'}
    

    Answer #2:

    __foo__: this is just a convention, a way for the Python system to use names that won't conflict with user names.

    _foo: this is just a convention, a way for the programmer to indicate that the variable is private (whatever that means in Python).

    __foo: this has real meaning: the interpreter replaces this name with _classname__foo as a way to ensure that the name will not overlap with a similar name in another class.

    No other form of underscores has meaning in the Python world.

    There's no difference between classes, variables, globals, etc. in these conventions.
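    To stress that name mangling only renames rather than hides, the attribute stays reachable under its mangled name (class C and attribute __x are made up for the example):

```python
class C:
    def __init__(self):
        self.__x = 1   # stored on the instance as _C__x


c = C()
# print(c.__x) here would raise AttributeError: no mangling outside the class body
print(c._C__x)   # 1 -- the mangled name works, so this is privacy by convention only
```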

    What To Learn Java Or Javascript First: StackOverflow Questions

    Finding median of list in Python

    How do you find the median of a list in Python? The list can be of any size and the numbers are not guaranteed to be in any particular order.

    If the list contains an even number of elements, the function should return the average of the middle two.

    Here are some examples (sorted for display purposes):

    median([1]) == 1
    median([1, 1]) == 1
    median([1, 1, 2, 4]) == 1.5
    median([0, 2, 5, 6, 8, 9, 9]) == 6
    median([0, 0, 0, 0, 4, 4, 6, 8]) == 2
    

    Answer #1:

    Python 3.4 has statistics.median:

    Return the median (middle value) of numeric data.

    When the number of data points is odd, return the middle data point. When the number of data points is even, the median is interpolated by taking the average of the two middle values:

    >>> median([1, 3, 5])
    3
    >>> median([1, 3, 5, 7])
    4.0
    

    Usage:

    import statistics
    
    items = [6, 1, 8, 2, 3]
    
    statistics.median(items)
    #>>> 3
    

    It's pretty careful with types, too:

    statistics.median(map(float, items))
    #>>> 3.0
    
    from decimal import Decimal
    statistics.median(map(Decimal, items))
    #>>> Decimal("3")
    

    Answer #2:

    (Works with Python 2 and Python 3):

    def median(lst):
        n = len(lst)
        s = sorted(lst)
        return (sum(s[n//2-1:n//2+1])/2.0, s[n//2])[n % 2] if n else None
    

    >>> median([-5, -5, -3, -4, 0, -1])
    -3.5
    

    numpy.median():

    >>> from numpy import median
    >>> median([1, -4, -1, -1, 1, -3])
    -1.0
    

    For Python 3.4 and above, use statistics.median:

    >>> from statistics import median
    >>> median([5, 2, 3, 8, 9, -2])
    4.0
    


    How can I open multiple files using "with open" in Python?

    I want to change a couple of files at one time, iff I can write to all of them. I'm wondering if I somehow can combine the multiple open calls with the with statement:

    try:
      with open("a", "w") as a and open("b", "w") as b:
        do_something()
    except IOError as e:
      print "Operation failed: %s" % e.strerror
    

    If that's not possible, what would an elegant solution to this problem look like?

    Answer #1:

    As of Python 2.7 (or 3.1 respectively) you can write

    with open("a", "w") as a, open("b", "w") as b:
        do_something()
    

    In earlier versions of Python, you can sometimes use contextlib.nested() to nest context managers. This won't work as expected for opening multiple files, though -- see the linked documentation for details.


    In the rare case that you want to open a variable number of files all at the same time, you can use contextlib.ExitStack, starting from Python version 3.3:

    with ExitStack() as stack:
        files = [stack.enter_context(open(fname)) for fname in filenames]
        # Do something with "files"
    

    Most of the time you have a variable set of files, you likely want to open them one after the other, though.

    Answer #2:

    For opening many files at once or for long file paths, it may be useful to break things up over multiple lines. From the Python Style Guide as suggested by @Sven Marnach in comments to another answer:

    with open("/path/to/InFile.ext", "r") as file_1, \
         open("/path/to/OutFile.ext", "w") as file_2:
        file_2.write(file_1.read())
    

    open() in Python does not create a file if it doesn't exist

    What is the best way to open a file as read/write if it exists, or if it does not, then create it and open it as read/write? From what I read, file = open("myfile.dat", "rw") should do this, right?

    It is not working for me (Python 2.6.2) and I'm wondering if it is a version problem, or not supposed to work like that or what.

    The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part.

    The enclosing directory was writeable by user and group, not other (I'm on a Linux system... so permissions 775 in other words), and the exact error was:

    IOError: no such file or directory.

    Answer #1:

    You should use open with the w+ mode:

    file = open("myfile.dat", "w+")
    

    Answer #2:

    The advantage of the following approach is that the file is properly closed at the block's end, even if an exception is raised on the way. It's equivalent to try-finally, but much shorter.

    with open("file.dat", "a+") as f:
        f.write(...)
        ...
    

    a+ Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing. -Python file modes

    The seek() method sets the file's current position.

    f.seek(pos [, (0|1|2)])
    pos .. position of the r/w pointer
    [] .. optionally
    () .. one of ->
      0 .. absolute position
      1 .. relative position to current
      2 .. relative position from end
    

    Only "rwab+" characters are allowed; there must be exactly one of "rwa" - see Stack Overflow question Python file modes detail.
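    Putting a+ and seek() together, a small sketch (the file name demo.txt is arbitrary): in a+ mode the pointer sits at the end, so writes append, and seek(0) rewinds before reading.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "a+") as f:   # creates the file, since it doesn't exist yet
    f.write("first line\n")
    f.seek(0)                 # rewind: a+ leaves the pointer at the end
    print(f.read())

with open(path, "a+") as f:   # reopening in a+ appends instead of truncating
    f.write("second line\n")
    f.seek(0)
    print(f.read())           # both lines
```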

    Difference between modes a, a+, w, w+, and r+ in built-in open function?

    In the python built-in open function, what is the exact difference between the modes w, a, w+, a+, and r+?

    In particular, the documentation implies that all of these will allow writing to the file, and says that it opens the files for "appending", "writing", and "updating" specifically, but does not define what these terms mean.

    Answer #1:

    The opening modes are exactly the same as those for the C standard library function fopen().

    The BSD fopen manpage defines them as follows:

     The argument mode points to a string beginning with one of the following
     sequences (Additional characters may follow these sequences.):
    
     ``r''   Open text file for reading.  The stream is positioned at the
             beginning of the file.
    
     ``r+''  Open for reading and writing.  The stream is positioned at the
             beginning of the file.
    
     ``w''   Truncate file to zero length or create text file for writing.
             The stream is positioned at the beginning of the file.
    
     ``w+''  Open for reading and writing.  The file is created if it does not
             exist, otherwise it is truncated.  The stream is positioned at
             the beginning of the file.
    
     ``a''   Open for writing.  The file is created if it does not exist.  The
             stream is positioned at the end of the file.  Subsequent writes
             to the file will always end up at the then current end of file,
             irrespective of any intervening fseek(3) or similar.
    
     ``a+''  Open for reading and writing.  The file is created if it does not
             exist.  The stream is positioned at the end of the file.  Subse-
             quent writes to the file will always end up at the then current
             end of file, irrespective of any intervening fseek(3) or similar.
    
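    The practical difference between w (truncate) and a (append) from the table above can be checked directly (the file name demo.txt is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:    # creates (or truncates) the file
    f.write("one\n")
with open(path, "a") as f:    # appends: previous content survives
    f.write("two\n")
with open(path, "w") as f:    # truncates: previous content is gone
    f.write("three\n")

with open(path, "r") as f:
    print(f.read())           # three
```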


    How do I merge two dictionaries in a single expression (taking union of dictionaries)?

    Question by Carl Meyer

    I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged (i.e. taking the union). The update() method would be what I need, if it returned its result instead of modifying a dictionary in-place.

    >>> x = {"a": 1, "b": 2}
    >>> y = {"b": 10, "c": 11}
    >>> z = x.update(y)
    >>> print(z)
    None
    >>> x
    {"a": 1, "b": 10, "c": 11}
    

    How can I get that final merged dictionary in z, not x?

    (To be extra-clear, the last-one-wins conflict-handling of dict.update() is what I'm looking for as well.)

    Answer #1:

    How can I merge two Python dictionaries in a single expression?

    For dictionaries x and y, z becomes a shallowly-merged dictionary with values from y replacing those from x.

    • In Python 3.9.0 or greater (released 17 October 2020): PEP-584, discussed here, was implemented and provides the simplest method:

      z = x | y          # NOTE: 3.9+ ONLY
      
    • In Python 3.5 or greater:

      z = {**x, **y}
      
    • In Python 2 (or 3.4 or lower), write a function:

      def merge_two_dicts(x, y):
          z = x.copy()   # start with keys and values of x
          z.update(y)    # modifies z with keys and values of y
          return z
      

      and now:

      z = merge_two_dicts(x, y)
      

    Explanation

    Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries:

    x = {"a": 1, "b": 2}
    y = {"b": 3, "c": 4}
    

    The desired result is to get a new dictionary (z) with the values merged, and the second dictionary's values overwriting those from the first.

    >>> z
    {"a": 1, "b": 3, "c": 4}
    

    A new syntax for this, proposed in PEP 448 and available as of Python 3.5, is

    z = {**x, **y}
    

    And it is indeed a single expression.

    Note that we can merge in with literal notation as well:

    z = {**x, "foo": 1, "bar": 2, **y}
    

    and now:

    >>> z
    {"a": 1, "b": 3, "foo": 1, "bar": 2, "c": 4}
    

    It is now showing as implemented in the release schedule for 3.5, PEP 478, and it has now made its way into the What's New in Python 3.5 document.

    However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process:

    z = x.copy()
    z.update(y) # which returns None since it mutates z
    

    In both approaches, y will come second and its values will replace x"s values, thus b will point to 3 in our final result.

    Not yet on Python 3.5, but want a single expression

    If you are not yet on Python 3.5 or need to write backward-compatible code, and you want this in a single expression, the most performant correct approach is to put it in a function:

    def merge_two_dicts(x, y):
        """Given two dictionaries, merge them into a new dict as a shallow copy."""
        z = x.copy()
        z.update(y)
        return z
    

    and then you have a single expression:

    z = merge_two_dicts(x, y)
    

    You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number:

    def merge_dicts(*dict_args):
        """
        Given any number of dictionaries, shallow copy and merge into a new dict,
        precedence goes to key-value pairs in latter dictionaries.
        """
        result = {}
        for dictionary in dict_args:
            result.update(dictionary)
        return result
    

    This function will work in Python 2 and 3 for all dictionaries. e.g. given dictionaries a to g:

    z = merge_dicts(a, b, c, d, e, f, g) 
    

    and key-value pairs in g will take precedence over dictionaries a to f, and so on.

    Critiques of Other Answers

    Don't use what you see in the formerly accepted answer:

    z = dict(x.items() + y.items())
    

    In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. In Python 3, this will fail because you"re adding two dict_items objects together, not two lists -

    >>> c = dict(a.items() + b.items())
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +: "dict_items" and "dict_items"
    

    and you would have to explicitly create them as lists, e.g. z = dict(list(x.items()) + list(y.items())). This is a waste of resources and computation power.

    Similarly, taking the union of items() in Python 3 (viewitems() in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, since sets are semantically unordered, the behavior is undefined in regards to precedence. So don"t do this:

    >>> c = dict(a.items() | b.items())
    

    This example demonstrates what happens when values are unhashable:

    >>> x = {"a": []}
    >>> y = {"b": []}
    >>> dict(x.items() | y.items())
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unhashable type: "list"
    

    Here's an example where y should have precedence, but instead the value from x is retained due to the arbitrary order of sets:

    >>> x = {"a": 2}
    >>> y = {"a": 1}
    >>> dict(x.items() | y.items())
    {"a": 2}
    

    Another hack you should not use:

    z = dict(x, **y)
    

    This uses the dict constructor and is very fast and memory-efficient (even slightly more so than our two-step process) but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic.

    Here's an example of the usage being remediated in django.

    Dictionaries are intended to take hashable keys (e.g. frozensets or tuples), but this method fails in Python 3 when keys are not strings.

    >>> c = dict(a, **b)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: keyword arguments must be strings
    

    From the mailing list, Guido van Rossum, the creator of the language, wrote:

    I am fine with declaring dict({}, **{1:3}) illegal, since after all it is abuse of the ** mechanism.

    and

    Apparently dict(x, **y) is going around as "cool hack" for "call x.update(y) and return x". Personally, I find it more despicable than cool.

    It is my understanding (as well as the understanding of the creator of the language) that the intended usage for dict(**y) is for creating dictionaries for readability purposes, e.g.:

    dict(a=1, b=10, c=11)
    

    instead of

    {"a": 1, "b": 10, "c": 11}
    

    Response to comments

    Despite what Guido says, dict(x, **y) is in line with the dict specification, which, by the way, works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a short-coming of dict. Nor is using the ** operator in this place an abuse of the mechanism; in fact, ** was designed precisely to pass dictionaries as keywords.

    Again, it doesn't work in Python 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforced it. dict broke this consistency in Python 2:

    >>> foo(**{("a", "b"): None})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: foo() keywords must be strings
    >>> dict(**{("a", "b"): None})
    {("a", "b"): None}
    

    This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython). Thus it was fixed in Python 3, as this usage could be a breaking change.

    I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints.

    More comments:

    dict(x.items() + y.items()) is still the most readable solution for Python 2. Readability counts.

    My response: merge_two_dicts(x, y) actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated.

    {**x, **y} does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word "merging" these answers describe "updating one dict with another", and not merging.

    Yes. I must refer you back to the question, which is asking for a shallow merge of two dictionaries, with the first's values being overwritten by the second's - in a single expression.

    Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values. As keys must be hashable and are usually therefore immutable, it is pointless to copy them:

    from copy import deepcopy
    
    def dict_of_dicts_merge(x, y):
        z = {}
        overlapping_keys = x.keys() & y.keys()
        for key in overlapping_keys:
            z[key] = dict_of_dicts_merge(x[key], y[key])
        for key in x.keys() - overlapping_keys:
            z[key] = deepcopy(x[key])
        for key in y.keys() - overlapping_keys:
            z[key] = deepcopy(y[key])
        return z
    

    Usage:

    >>> x = {"a":{1:{}}, "b": {2:{}}}
    >>> y = {"b":{10:{}}, "c": {11:{}}}
    >>> dict_of_dicts_merge(x, y)
    {"b": {2: {}, 10: {}}, "a": {1: {}}, "c": {11: {}}}
    

    Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at my answer to the canonical question on a "Dictionaries of dictionaries merge".

    Less Performant But Correct Ad-hocs

    These approaches are less performant, but they will provide correct behavior. They will be much less performant than copy and update or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they do respect the order of precedence (latter dictionaries have precedence)

    You can also chain the dictionaries manually inside a dict comprehension:

    {k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7
    

    or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced):

    dict((k, v) for d in dicts for k, v in d.items()) # iteritems in Python 2
    

    itertools.chain will chain the iterators over the key-value pairs in the correct order:

    from itertools import chain
    z = dict(chain(x.items(), y.items())) # iteritems in Python 2
    

    Performance Analysis

    I'm only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.)

    from timeit import repeat
    from itertools import chain
    
    x = dict.fromkeys("abcdefg")
    y = dict.fromkeys("efghijk")
    
    def merge_two_dicts(x, y):
        z = x.copy()
        z.update(y)
        return z
    
    min(repeat(lambda: {**x, **y}))
    min(repeat(lambda: merge_two_dicts(x, y)))
    min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
    min(repeat(lambda: dict(chain(x.items(), y.items()))))
    min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
    

    In Python 3.8.1, NixOS:

    >>> min(repeat(lambda: {**x, **y}))
    1.0804965235292912
    >>> min(repeat(lambda: merge_two_dicts(x, y)))
    1.636518670246005
    >>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
    3.1779992282390594
    >>> min(repeat(lambda: dict(chain(x.items(), y.items()))))
    2.740647904574871
    >>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
    4.266070580109954
    
    $ uname -a
    Linux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux
    

    Resources on Dictionaries

    Answer #2:

    In your case, what you can do is:

    z = dict(list(x.items()) + list(y.items()))
    

    This will, as you want it, put the final dict in z, and make the value for key b be properly overridden by the second (y) dict's value:

    >>> x = {"a":1, "b": 2}
    >>> y = {"b":10, "c": 11}
    >>> z = dict(list(x.items()) + list(y.items()))
    >>> z
    {'a': 1, 'b': 10, 'c': 11}
    
    

    If you use Python 2, you can even remove the list() calls. To create z:

    >>> z = dict(x.items() + y.items())
    >>> z
    {"a": 1, "c": 11, "b": 10}
    

    If you use Python version 3.9.0a4 or greater, then you can directly use:

    x = {"a": 1, "b": 2}
    y = {"b": 10, "c": 11}
    z = x | y
    print(z)
    
    {'a': 1, 'b': 10, 'c': 11}
    

    Answer #3:

    An alternative:

    z = x.copy()
    z.update(y)
    

    Answer #4:

    Another, more concise, option:

    z = dict(x, **y)
    

    Note: this has become a popular answer, but it is important to point out that if y has any non-string keys, the fact that this works at all is an abuse of a CPython implementation detail, and it does not work in Python 3, or in PyPy, IronPython, or Jython. Also, Guido is not a fan. So I can't recommend this technique for forward-compatible or cross-implementation portable code, which really means it should be avoided entirely.

    Answer #5:

    This probably won't be a popular answer, but you almost certainly do not want to do this. If you want a copy that's a merge, then use copy (or deepcopy, depending on what you want) and then update. The two lines of code are much more readable - more Pythonic - than the single line creation with .items() + .items(). Explicit is better than implicit.

    In addition, when you use .items() (pre Python 3.0), you're creating a new list that contains the items from the dict. If your dictionaries are large, then that is quite a lot of overhead (two large lists that will be thrown away as soon as the merged dict is created). update() can work more efficiently, because it can run through the second dict item-by-item.

    In terms of time:

    >>> setup = "x = dict(zip(range(1000), range(1000))); y = dict(zip(range(1000, 2000), range(1000, 2000)))"
    >>> timeit.Timer("dict(x, **y)", setup).timeit(100000)
    15.52571702003479
    >>> timeit.Timer("temp = x.copy(); temp.update(y)", setup).timeit(100000)
    15.694622993469238
    >>> timeit.Timer("dict(x.items() + y.items())", setup).timeit(100000)
    41.484580039978027
    

    IMO the tiny slowdown between the first two is worth it for the readability. In addition, keyword arguments for dictionary creation was only added in Python 2.3, whereas copy() and update() will work in older versions.


    Separation of business logic and data access in django

    I am writing a project in Django and I see that 80% of the code is in the file models.py. This code is confusing and, after a certain time, I cease to understand what is really happening.

    Here is what bothers me:

    1. I find it ugly that my model level (which was supposed to be responsible only for the work with data from a database) is also sending email, walking on API to other services, etc.
    2. Also, I find it unacceptable to place business logic in the view, because this way it becomes difficult to control. For example, in my application there are at least three ways to create new instances of User, but technically it should create them uniformly.
    3. I do not always notice when the methods and properties of my models become non-deterministic and when they develop side effects.

    Here is a simple example. At first, the User model was like this:

    class User(models.Model):
    
        def get_present_name(self):
            return self.name or "Anonymous"
    
        def activate(self):
            self.status = "activated"
            self.save()
    

    Over time, it turned into this:

    class User(models.Model):
    
        def get_present_name(self): 
            # property became non-deterministic in terms of database
            # data is taken from another service by api
            return remote_api.request_user_name(self.uid) or "Anonymous" 
    
        def activate(self):
            # method now has a side effect (send message to user)
            self.status = "activated"
            self.save()
            send_mail("Your account is activated!", "…", [self.email])
    

    What I want is to separate entities in my code:

    1. Entities of my database, persistence level: What data does my application keep?
    2. Entities of my application, business logic level: What does my application do?

    What are the good practices to implement such an approach that can be applied in Django?

    Answer #1:

    It seems like you are asking about the difference between the data model and the domain model – the latter is where you can find the business logic and entities as perceived by your end user, the former is where you actually store your data.

    Furthermore, I've interpreted the 3rd part of your question as: how to notice failure to keep these models separate.

    These are two very different concepts and it's always hard to keep them separate. However, there are some common patterns and tools that can be used for this purpose.

    About the Domain Model

    The first thing you need to recognize is that your domain model is not really about data; it is about actions and questions such as "activate this user", "deactivate this user", "which users are currently activated?", and "what is this user's name?". In classical terms: it's about queries and commands.

    Thinking in Commands

    Let's start by looking at the commands in your example: "activate this user" and "deactivate this user". The nice thing about commands is that they can easily be expressed by small given-when-then scenarios:

    given an inactive user
    when the admin activates this user
    then the user becomes active
    and a confirmation e-mail is sent to the user
    and an entry is added to the system log
    (etc. etc.)

    Such scenarios are useful to see how different parts of your infrastructure can be affected by a single command – in this case your database (some kind of "active" flag), your mail server, your system log, etc.

    Such scenarios also really help you in setting up a Test Driven Development environment.

    And finally, thinking in commands really helps you create a task-oriented application. Your users will appreciate this :-)

    Expressing Commands

    Django provides two easy ways of expressing commands; they are both valid options and it is not unusual to mix the two approaches.

    The service layer

    The service module has already been described by @Hedde. Here you define a separate module and each command is represented as a function.

    services.py

    def activate_user(user_id):
        user = User.objects.get(pk=user_id)
    
        # set active flag
        user.active = True
        user.save()
    
        # mail user
        send_mail(...)
    
        # etc etc
    

    Using forms

    The other way is to use a Django Form for each command. I prefer this approach, because it combines multiple closely related aspects:

    • execution of the command (what does it do?)
    • validation of the command parameters (can it do this?)
    • presentation of the command (how can I do this?)

    forms.py

    class ActivateUserForm(forms.Form):
    
        user_id = IntegerField(widget = UsernameSelectWidget, verbose_name="Select a user to activate")
        # the username select widget is not a standard Django widget, I just made it up
    
        def clean_user_id(self):
            user_id = self.cleaned_data["user_id"]
            if User.objects.get(pk=user_id).active:
                raise ValidationError("This user cannot be activated")
            # you can also check authorizations etc. 
            return user_id
    
        def execute(self):
            """
            This is not a standard method in the forms API; it is intended to replace the 
            "extract-data-from-form-in-view-and-do-stuff" pattern by a more testable pattern. 
            """
            user_id = self.cleaned_data["user_id"]
    
            user = User.objects.get(pk=user_id)
    
            # set active flag
            user.active = True
            user.save()
    
            # mail user
            send_mail(...)
    
            # etc etc
    

    Thinking in Queries

    Your example did not contain any queries, so I took the liberty of making up a few useful queries. I prefer to use the term "question", but queries is the classical terminology. Interesting queries are: "What is the name of this user?", "Can this user log in?", "Show me a list of deactivated users", and "What is the geographical distribution of deactivated users?"

    Before embarking on answering these queries, you should always ask yourself whether the query is:

    • a presentational query just for my templates, and/or
    • a business logic query tied to executing my commands, and/or
    • a reporting query.

    Presentational queries are merely made to improve the user interface. The answers to business logic queries directly affect the execution of your commands. Reporting queries are merely for analytical purposes and have looser time constraints. These categories are not mutually exclusive.

    The other question is: "do I have complete control over the answers?" For example, when querying the user's name (in this context) we do not have any control over the outcome, because we rely on an external API.

    Making Queries

    The most basic query in Django is the use of the Manager object:

    User.objects.filter(active=True)
    

    Of course, this only works if the data is actually represented in your data model. This is not always the case. In those cases, you can consider the options below.

    Custom tags and filters

    The first alternative is useful for queries that are merely presentational: custom tags and template filters.

    template.html

    <h1>Welcome, {{ user|friendly_name }}</h1>
    

    template_tags.py

    @register.filter
    def friendly_name(user):
        return remote_api.get_cached_name(user.id)
    

    Query methods

    If your query is not merely presentational, you could add queries to your services.py (if you are using that), or introduce a queries.py module:

    queries.py

    def inactive_users():
        return User.objects.filter(active=False)
    
    
    def users_called_publysher():
        for user in User.objects.all():
            if remote_api.get_cached_name(user.id) == "publysher":
                yield user 
    

    Proxy models

    Proxy models are very useful in the context of business logic and reporting. You basically define an enhanced subset of your model. You can override a Manager’s base QuerySet by overriding the Manager.get_queryset() method.

    models.py

    class InactiveUserManager(models.Manager):
        def get_queryset(self):
            query_set = super(InactiveUserManager, self).get_queryset()
            return query_set.filter(active=False)
    
    class InactiveUser(User):
        """
        >>> for user in InactiveUser.objects.all():
        ...     assert user.active is False
        """
    
        objects = InactiveUserManager()
        class Meta:
            proxy = True
    

    Query models

    For queries that are inherently complex, but are executed quite often, there is the possibility of query models. A query model is a form of denormalization where relevant data for a single query is stored in a separate model. The trick of course is to keep the denormalized model in sync with the primary model. Query models can only be used if changes are entirely under your control.

    models.py

    class InactiveUserDistribution(models.Model):
        country = CharField(max_length=200)
        inactive_user_count = IntegerField(default=0)
    

    The first option is to update these models in your commands. This is very useful if these models are only changed by one or two commands.

    forms.py

    class ActivateUserForm(forms.Form):
        # see above
       
        def execute(self):
            # see above
        query_model, _ = InactiveUserDistribution.objects.get_or_create(country=user.country)
            query_model.inactive_user_count -= 1
            query_model.save()
    

    A better option would be to use custom signals. These signals are of course emitted by your commands. Signals have the advantage that you can keep multiple query models in sync with your original model. Furthermore, signal processing can be offloaded to background tasks, using Celery or similar frameworks.

    signals.py

    user_activated = Signal()    # note: providing_args was deprecated in Django 3.1 and removed in 4.0
    user_deactivated = Signal()
    

    forms.py

    class ActivateUserForm(forms.Form):
        # see above
       
        def execute(self):
            # see above
            user_activated.send_robust(sender=self, user=user)
    

    models.py

    class InactiveUserDistribution(models.Model):
        # see above
    
    @receiver(user_activated)
    def on_user_activated(sender, **kwargs):
        user = kwargs["user"]
        query_model, _ = InactiveUserDistribution.objects.get_or_create(country=user.country)
        query_model.inactive_user_count -= 1
        query_model.save()
        
    

    Keeping it clean

    When using this approach, it becomes ridiculously easy to determine if your code stays clean. Just follow these guidelines:

    • Does my model contain methods that do more than managing database state? You should extract a command.
    • Does my model contain properties that do not map to database fields? You should extract a query.
    • Does my model reference infrastructure that is not my database (such as mail)? You should extract a command.

    The same goes for views (because views often suffer from the same problem).

    • Does my view actively manage database models? You should extract a command.

    Some References

    Django documentation: proxy models

    Django documentation: signals

    Architecture: Domain Driven Design

    Answer #2:

    I usually implement a service layer in between views and models. This acts like your project's API and gives you a good helicopter view of what is going on. I inherited this practice from a colleague of mine who uses this layering technique a lot with Java projects (JSF), e.g.:

    models.py

    class Book(models.Model):
       author = models.ForeignKey(User)
       title = models.CharField(max_length=125)
    
       class Meta:
           app_label = "library"
    

    services.py

    from library.models import Book
    
    def get_books(limit=None, **filters):
        """ Simple service function for retrieving books; can be widely extended. """
        return Book.objects.filter(**filters)[:limit]  # list[:None] will return the entire list
    

    views.py

    from library.services import get_books
    
    class BookListView(ListView):
        """ simple view, e.g. implement a _build and _apply filters function """
        queryset = get_books()
    

    Mind you, I usually take models, views and services to module level and separate them even further, depending on the project's size.

    Cosine Similarity between 2 Number Lists

    I want to calculate the cosine similarity between two lists, let's say for example list 1 which is dataSetI and list 2 which is dataSetII.

    Let's say dataSetI is [3, 45, 7, 2] and dataSetII is [2, 54, 13, 15]. The length of the lists is always equal. I want to report cosine similarity as a number between 0 and 1.

    dataSetI = [3, 45, 7, 2]
    dataSetII = [2, 54, 13, 15]
    
    def cosine_similarity(list1, list2):
      # How to?
      pass
    
    print(cosine_similarity(dataSetI, dataSetII))
    

    Answer #1:

    You should try SciPy. It has a bunch of useful scientific routines, for example "routines for computing integrals numerically, solving differential equations, optimization, and sparse matrices." It uses the superfast, optimized NumPy for its number crunching. See the SciPy documentation for installation instructions.

    Note that spatial.distance.cosine computes the distance, and not the similarity. So, you must subtract the value from 1 to get the similarity.

    from scipy import spatial
    
    dataSetI = [3, 45, 7, 2]
    dataSetII = [2, 54, 13, 15]
    result = 1 - spatial.distance.cosine(dataSetI, dataSetII)
    

    Answer #2:

    Another version, based on NumPy only (here a and b are the two input vectors):

    from numpy import dot
    from numpy.linalg import norm

    cos_sim = dot(a, b) / (norm(a) * norm(b))
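    If you want to avoid the SciPy/NumPy dependency entirely, the same formula is easy to write with the standard library alone; a minimal sketch using the question's sample data:

```python
from math import sqrt

def cosine_similarity(v1, v2):
    """Cosine similarity of two equal-length numeric sequences."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = sqrt(sum(a * a for a in v1))
    norm2 = sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

dataSetI = [3, 45, 7, 2]
dataSetII = [2, 54, 13, 15]
print(cosine_similarity(dataSetI, dataSetII))  # ≈ 0.972
```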
    

    Haversine Formula in Python (Bearing and Distance between two GPS points)

    Problem

    I would like to know how to get the distance and bearing between 2 GPS points. I have researched the haversine formula. Someone told me that I could also find the bearing using the same data.

    Edit

    Everything is working fine, but the bearing doesn't quite work right yet. The bearing comes out negative, but it should be between 0 and 360 degrees. The test data should give a horizontal bearing of 96.02166666666666, and is:

    Start point: 53.32055555555556 , -1.7297222222222221   
    Bearing:  96.02166666666666  
    Distance: 2 km  
    Destination point: 53.31861111111111, -1.6997222222222223  
    Final bearing: 96.04555555555555
    

    Here is my new code:

    from math import *

    Aaltitude = 2000
    Opposite = 20000

    lat1 = 53.32055555555556
    lat2 = 53.31861111111111
    lon1 = -1.7297222222222221
    lon2 = -1.6997222222222223

    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])

    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
    c = 2 * atan2(sqrt(a), sqrt(1-a))
    Base = 6371 * c


    Bearing = atan2(cos(lat1)*sin(lat2)-sin(lat1)*cos(lat2)*cos(lon2-lon1), sin(lon2-lon1)*cos(lat2))

    Bearing = degrees(Bearing)
    print("--------------------")
    print("Horizontal Distance:")
    print(Base)
    print("--------------------")
    print("Bearing:")
    print(Bearing)
    print("--------------------")


    Base2 = Base * 1000
    distance = Base * 2 + Opposite * 2 / 2
    Caltitude = Opposite - Aaltitude

    a = Opposite/Base
    b = atan(a)
    c = degrees(b)

    distance = distance / 1000

    print("The degree of vertical angle is:")
    print(c)
    print("--------------------")
    print("The distance between the Balloon GPS and the Antenna GPS is:")
    print(distance)
    print("--------------------")
    

    Answer #1:

    Here's a Python version:

    from math import radians, cos, sin, asin, sqrt
    
    def haversine(lon1, lat1, lon2, lat2):
        """
        Calculate the great circle distance in kilometers between two points 
        on the earth (specified in decimal degrees)
        """
        # convert decimal degrees to radians 
        lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    
        # haversine formula 
        dlon = lon2 - lon1 
        dlat = lat2 - lat1 
        a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
        c = 2 * asin(sqrt(a)) 
        r = 6371 # Radius of earth in kilometers. Use 3956 for miles. Determines return value units.
        return c * r
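    As for the bearing part of the question: the code above only computes the distance. A common fix for the negative bearing (a sketch, not part of the original answer) is to use the standard initial-bearing formula - note that the question's code passes the atan2 arguments in the opposite order - and normalize the result into the 0-360 range with a modulo:

```python
from math import radians, degrees, sin, cos, atan2

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees in [0, 360)."""
    lat1, lon1, lat2, lon2 = map(radians, [lat1, lon1, lat2, lon2])
    dlon = lon2 - lon1
    y = sin(dlon) * cos(lat2)
    x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dlon)
    return (degrees(atan2(y, x)) + 360) % 360  # shift from (-180, 180] to [0, 360)

# The question's test points; this yields roughly the expected 96 degrees
b = initial_bearing(53.32055555555556, -1.7297222222222221,
                    53.31861111111111, -1.6997222222222223)
```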
    


    How to execute a program or call a system command?

    Question by alan lai

    How do you call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?

    Answer #1:

    Use the subprocess module in the standard library:

    import subprocess
    subprocess.run(["ls", "-l"])
    

    The advantage of subprocess.run over os.system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc...).
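    To illustrate that flexibility, here is a small sketch that captures stdout and checks the exit status; it uses the Python interpreter itself as a portable child process:

```python
import subprocess
import sys

# Run a child process, capture its output as text, and fail loudly on error
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the child')"],
    capture_output=True,  # collect stdout and stderr (Python 3.7+)
    text=True,            # decode bytes to str
    check=True,           # raise CalledProcessError on a non-zero exit code
)
print(result.returncode)      # 0
print(result.stdout.strip())  # hello from the child
```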

    Even the documentation for os.system recommends using subprocess instead:

    The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.

    On Python 3.4 and earlier, use subprocess.call instead of .run:

    subprocess.call(["ls", "-l"])
    

    Answer #2:

    Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

    1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

      os.system("some_command < input_file | another_command > output_file")  
      

      However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, et cetera. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation.

    2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See the documentation.

    3. The Popen class of the subprocess module. This is intended as a replacement for os.popen, but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

      print(subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read())
      

      instead of

      print(os.popen("echo Hello World").read())
      

      but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.

    4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

      return_code = subprocess.call("echo Hello World", shell=True)
      

      See the documentation.

    5. If you're on Python 3.5 or later, you can use the new subprocess.run function, which is a lot like the above but even more flexible and returns a CompletedProcess object when the command finishes executing.

    6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

    The subprocess module should probably be what you use.

    Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted - for example, if a user is entering some or any part of the string. If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

    print(subprocess.Popen("echo %s " % user_input, stdout=PIPE).stdout.read())
    

    and imagine that the user enters something like "my mama didnt love me && rm -rf /", which could erase the whole filesystem.
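    The safest way to avoid this class of bug is to skip the shell entirely and pass the arguments as a list, so nothing is ever interpreted by a shell; a sketch (user_input stands in for untrusted data):

```python
import subprocess
import sys

user_input = "my mama didnt love me && rm -rf /"

# With a list and shell=False (the default), the untrusted string is passed
# to the child as a single literal argument -- the "&&" is never interpreted.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", user_input],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # the raw string is printed; nothing is executed
```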

    Answer #3:

    Typical implementation:

    import subprocess
    
    p = subprocess.Popen("ls", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in p.stdout.readlines():
        print(line.decode(), end="")
    retval = p.wait()
    

    You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().

    Answer #4:

    Some hints on detaching the child process from the calling one (starting the child process in background).

    Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.

    The classical example from the subprocess module documentation is:

    import subprocess
    import sys
    
    # Some code here
    
    pid = subprocess.Popen([sys.executable, "longtask.py"]) # Call subprocess
    
    # Some more code here
    

    The idea here is that you do not want to wait in the line "call subprocess" until the longtask.py is finished. But it is not clear what happens after the line "some more code here" from the example.

    My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

    On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

    The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:

    DETACHED_PROCESS = 0x00000008
    
    pid = subprocess.Popen([sys.executable, "longtask.py"],
                           creationflags=DETACHED_PROCESS).pid
    

    (UPD 2015.10.27: @eryksun notes in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).)

    On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

    pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
    

    I have not checked the code on other platforms and do not know the reasons of the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling on starting background processes in Python does not shed any light yet.

    Answer #5:

    import os
    os.system("your command")
    

    Note that this is dangerous, since the command isn't escaped, so it is vulnerable to shell injection. I leave it up to you to google for the relevant documentation on the "os" and "sys" modules. There are a bunch of functions (exec* and spawn*) that will do similar things.


    How do I calculate percentiles with python/numpy?

    Is there a convenient way to calculate percentiles for a sequence or single-dimensional numpy array?

    I am looking for something similar to Excel's percentile function.

    I looked in NumPy's statistics reference, and couldn't find this. All I could find is the median (50th percentile), but not something more specific.

    Answer #1:

    You might be interested in the SciPy Stats package. It has the percentile function you're after and many other statistical goodies.

    percentile() is available in numpy too.

    import numpy as np
    a = np.array([1, 2, 3, 4, 5])
    p = np.percentile(a, 50)  # return 50th percentile, i.e. the median
    print(p)
    # 3.0
    

    This ticket originally suggested that percentile() would not be integrated into numpy anytime soon, but it has since been added as np.percentile (as shown above).
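    If numpy is not an option at all, the standard library's statistics module (Python 3.8+) offers quantiles(); note that its default cut-point convention ("exclusive") differs slightly from np.percentile:

```python
import statistics

data = [1, 2, 3, 4, 5]

# n=4 gives the three quartile cut points (25th, 50th and 75th percentiles)
quartiles = statistics.quantiles(data, n=4)
print(quartiles)  # [1.5, 3.0, 4.5] with the default "exclusive" method
```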
