
How to Speed Up Your Python Code Even If You're a Beginner

Image by author | Ideogram

Let's be honest. When you're learning Python, you're probably not thinking about performance. You're just trying to make your code work! But here's the thing: making your Python code faster doesn't require you to become an expert programmer overnight.

With a few simple techniques that I'll show you today, you can improve your code's speed and memory usage significantly.

In this article, we'll walk through five practical optimization techniques. For each one, I'll show you the "before" code (how many beginners write it), the "after" code (the optimized version), and explain why the improvement works.

🔗 Link to the code on GitHub

1. Replace Loops with List Comprehensions

Let's start with something you probably do all the time: creating a new list by transforming an existing one. Most beginners reach for a loop, but Python has a much faster way to do this.

Before Optimization

Here's how many beginners would square a list of numbers:

import time

def square_numbers_loop(numbers):
    result = [] 
    for num in numbers: 
        result.append(num ** 2) 
    return result

# Let's test this with 1000000 numbers to see the performance
test_numbers = list(range(1000000))

start_time = time.time()
squared_loop = square_numbers_loop(test_numbers)
loop_time = time.time() - start_time
print(f"Loop time: {loop_time:.4f} seconds")

This code creates an empty list called result, loops through each number in our input list, squares it, and appends it to the result list. Works fine, right?

After Optimization

Now let's rewrite this using a list comprehension:

def square_numbers_comprehension(numbers):
    return [num ** 2 for num in numbers]  # Create the entire list in one line

start_time = time.time()
squared_comprehension = square_numbers_comprehension(test_numbers)
comprehension_time = time.time() - start_time
print(f"Comprehension time: {comprehension_time:.4f} seconds")
print(f"Improvement: {loop_time / comprehension_time:.2f}x faster")

The one-liner [num ** 2 for num in numbers] does the same thing as our loop, but it tells Python "create a list where each item is the square of the corresponding number."

Here's the output:

Loop time: 0.0840 seconds
Comprehension time: 0.0736 seconds
Improvement: 1.14x faster

Performance improvement: list comprehensions are typically 30-50% faster than equivalent loops. The gain becomes more noticeable when working with larger datasets.

Why does this work? List comprehensions are implemented in C under the hood, so they avoid much of the overhead of explicit Python loops, such as the variable lookups and function calls that happen behind the scenes.
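If you want to measure this yourself more reliably than with a single time.time() call, the standard library's timeit module repeats the measurement for you. Here's a minimal sketch (the list size and repeat count are arbitrary choices, not from the original benchmark):

```python
import timeit

def square_loop(numbers):
    # Explicit loop version: append each square one at a time
    result = []
    for num in numbers:
        result.append(num ** 2)
    return result

def square_comprehension(numbers):
    # Comprehension version: build the whole list in one expression
    return [num ** 2 for num in numbers]

numbers = list(range(10_000))

# timeit.timeit runs the callable `number` times and returns total seconds
loop_time = timeit.timeit(lambda: square_loop(numbers), number=200)
comp_time = timeit.timeit(lambda: square_comprehension(numbers), number=200)
print(f"loop: {loop_time:.4f}s, comprehension: {comp_time:.4f}s")
```

Averaging over many runs smooths out noise from the operating system, which a single time.time() measurement can't do.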

2. Choose the Correct Data Structure for the Job

This one is huge, and it's something that can instantly make your code many times faster with just a small change. The key is understanding when to use lists versus sets versus dictionaries.

Before Optimization

Imagine you want to find the elements that two lists have in common. Here's the intuitive approach:

def find_common_elements_list(list1, list2):
    common = []
    for item in list1:  # Go through each item in the first list
        if item in list2:  # Check if it exists in the second list
            common.append(item)  # If yes, add it to our common list
    return common

# Test with reasonably large lists
large_list1 = list(range(10000))     
large_list2 = list(range(5000, 15000))

start_time = time.time()
common_list = find_common_elements_list(large_list1, large_list2)
list_time = time.time() - start_time
print(f"List approach time: {list_time:.4f} seconds")

This code loops through the first list and, for each item, checks whether it exists in the second list using item in list2. The problem? Every time you run item in list2, Python has to search through the entire second list until it finds a match. That's slow!

After Optimization

Here's the same operation using a set for fast membership checks:

def find_common_elements_set(list1, list2):
    set2 = set(list2)  # Convert list to a set (one-time cost)
    return [item for item in list1 if item in set2]  # Check membership in set

start_time = time.time()
common_set = find_common_elements_set(large_list1, large_list2)
set_time = time.time() - start_time
print(f"Set approach time: {set_time:.4f} seconds")
print(f"Improvement: {list_time / set_time:.2f}x faster")

First, we convert the second list into a set. Then, instead of checking item in list2, we check item in set2. This small change makes each membership test almost instantaneous.

Here's the output:

List approach time: 0.8478 seconds
Set approach time: 0.0010 seconds
Improvement: 863.53x faster

Performance improvement: this can easily be 100x or more faster on large datasets.

Why does this work? Sets use hash tables under the hood. When you check whether an item is in a list, Python searches through every element; a set uses the item's hash to jump directly to where it should be. It's like using a book's index instead of reading every page to find what you want.
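You can see the difference in just a few lines. This is a small illustration (the sizes here are arbitrary) of why the lookup cost differs between the two structures:

```python
haystack_list = list(range(100_000))
haystack_set = set(haystack_list)  # one-time conversion cost

needle = 99_999  # worst case for the list: the very last element

# The list scans elements one by one until it finds a match (O(n));
# the set hashes the value and jumps straight to its bucket (O(1) on average).
print(needle in haystack_list)  # True, but only after ~100,000 comparisons
print(needle in haystack_set)   # True, after roughly one hash lookup
```

One caveat: set elements must be hashable, so a list of lists can't be converted to a set directly (convert the inner lists to tuples first).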

3. Use Built-in Python Functions Whenever Possible

Python ships with tons of highly optimized built-in functions. Before you write your own loop or custom function to do something, check whether Python already has a function for it.

Before Optimization

Here's how you might compute the sum and maximum of a list if you didn't know about the built-ins:

def calculate_sum_manual(numbers):
    total = 0
    for num in numbers:  
        total += num     
    return total

def find_max_manual(numbers):
    max_val = numbers[0] 
    for num in numbers[1:]: 
        if num > max_val:    
            max_val = num   
    return max_val

test_numbers = list(range(1000000))  

start_time = time.time()
manual_sum = calculate_sum_manual(test_numbers)
manual_max = find_max_manual(test_numbers)
manual_time = time.time() - start_time
print(f"Manual approach time: {manual_time:.4f} seconds")

The sum function starts with a total of 0 and adds each number to it. The max function starts with the first number and compares every other number against the current maximum to see which is bigger.

After Optimization

Here's the same thing using Python's built-in functions:

start_time = time.time()
builtin_sum = sum(test_numbers)    
builtin_max = max(test_numbers)    
builtin_time = time.time() - start_time
print(f"Built-in approach time: {builtin_time:.4f} seconds")
print(f"Improvement: {manual_time / builtin_time:.2f}x faster")

That's it! sum() returns the total of all the numbers in the list, and max() returns the largest one. Same result, much less code, and faster.

Here's the output:

Manual approach time: 0.0805 seconds
Built-in approach time: 0.0413 seconds
Improvement: 1.95x faster

Performance improvement: built-in functions are usually about twice as fast as manual implementations.

Why does this work? Python's built-in functions are written in C and heavily optimized.
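And sum() and max() are just the start. Here's a short tour of a few other built-ins that replace common hand-written loops (the sample list is arbitrary):

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

total = sum(numbers)       # 31: adds everything up
largest = max(numbers)     # 9
smallest = min(numbers)    # 1
ordered = sorted(numbers)  # [1, 1, 2, 3, 4, 5, 6, 9], a new sorted list
has_even = any(n % 2 == 0 for n in numbers)  # True: at least one even number
all_positive = all(n > 0 for n in numbers)   # True: no zeros or negatives
print(total, largest, smallest, has_even, all_positive)
```

Each of these would take several lines to write by hand, and the built-in version is faster every time.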

4. Build Strings Efficiently with join

String concatenation is something every program does, but many beginners do it in a way that becomes painfully slow as the strings get longer.

Before Optimization

Here's how you might build a CSV string by concatenating with the + operator:

def create_csv_plus(data):
    result = ""  # Start with an empty string
    for row in data:  # Go through each row of data
        for i, item in enumerate(row):  # Go through each item in the row
            result += str(item)  # Add the item to our result string
            if i < len(row) - 1:  # If it's not the last item
                result += ","     # Add a comma
        result += "\n"  # Add a newline after each row
    return result

# Test data: 1000 rows with 10 columns each
test_data = [[f"item_{i}_{j}" for j in range(10)] for i in range(1000)]

start_time = time.time()
csv_plus = create_csv_plus(test_data)
plus_time = time.time() - start_time
print(f"String concatenation time: {plus_time:.4f} seconds")

This code builds our CSV string piece by piece. For each row, it goes through each item, converts it to a string, and appends it to the result, adding commas between items and a newline after each row.

After Optimization

Here's the same code using the join method:

def create_csv_join(data):
    # For each row, join the items with commas, then join all rows with newlines
    return "\n".join(",".join(str(item) for item in row) for row in data)

start_time = time.time()
csv_join = create_csv_join(test_data)
join_time = time.time() - start_time
print(f"Join method time: {join_time:.4f} seconds")
print(f"Improvement: {plus_time / join_time:.2f}x faster")

This one line does it all! The inner part ",".join(str(item) for item in row) takes each row and joins its items with commas. The outer part "\n".join(...) takes all those comma-separated rows and joins them with newlines.

Here's the output:

String concatenation time: 0.0043 seconds
Join method time: 0.0022 seconds
Improvement: 1.94x faster

Performance improvement: join is significantly faster than + concatenation for large strings.

Why does this work? Strings in Python are immutable, so every += creates a brand-new string object. With large strings, this becomes very wasteful. The join method calculates how much memory it needs up front and builds the string in a single pass.
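When the pieces of a string arrive one at a time and you can't express everything as a single join, the usual pattern is to collect the pieces in a list and join once at the end. A minimal sketch of that pattern:

```python
parts = []
for i in range(5):
    parts.append(f"row_{i}")  # appending to a list is cheap

# One join at the end builds the final string in a single pass
line = ",".join(parts)
print(line)  # row_0,row_1,row_2,row_3,row_4
```

You get the readability of a loop without the cost of repeated string copies.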

5. Use Generators for Memory-Efficient Processing

Sometimes you don't need to keep all your data in memory at once. Generators let you produce data on demand, which can save enormous amounts of memory.

Before Optimization

Here's how you might process a big dataset by keeping everything in a list:

import sys

def process_large_dataset_list(n):
    processed_data = []  
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        processed_data.append(processed_value)  # Store each processed value
    return processed_data

# Test with 100,000 items
n = 100000
list_result = process_large_dataset_list(n)
list_memory = sys.getsizeof(list_result)
print(f"List memory usage: {list_memory:,} bytes")

This function processes the numbers from 0 to n-1, computes something for each one (squaring it, adding three times the number, and adding 42), and appends every result to a list. The problem is that we keep all 100,000 values in memory at the same time.

After Optimization

Here's the same processing using a generator:

def process_large_dataset_generator(n):
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        yield processed_value  # Yield each value instead of storing it

# Create the generator (this doesn't process anything yet!)
gen_result = process_large_dataset_generator(n)
gen_memory = sys.getsizeof(gen_result)
print(f"Generator memory usage: {gen_memory:,} bytes")
print(f"Memory improvement: {list_memory / gen_memory:.0f}x less memory")

# Now we can process items one at a time
total = 0
for value in process_large_dataset_generator(n):
    total += value
    # Each value is processed on-demand and can be garbage collected

The important difference is yield instead of append. The yield keyword is what makes this a generator function: it produces values one at a time instead of creating them all at once.

Here's the output:

List memory usage: 800,984 bytes
Generator memory usage: 224 bytes
Memory improvement: 3576x less memory

Performance improvement: generators can use dramatically less memory than lists for large datasets.

Why does this work? Generators use lazy evaluation, computing values only when you ask for them. The generator object itself only stores its current state: where it is in the computation.
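Because generators produce values lazily, they compose nicely with functions that consume one item at a time. A small sketch using the same processing formula as above (itertools.islice is from the standard library):

```python
from itertools import islice

def processed_values(n):
    for i in range(n):
        yield i ** 2 + i * 3 + 42  # same formula as the example above

# sum() pulls one value at a time; the full sequence never exists in memory
total = sum(processed_values(100_000))

# islice takes just the first few values without computing the rest
first_three = list(islice(processed_values(100_000), 3))
print(first_three)  # [42, 46, 52]
```

One thing to remember: a generator is single-use. Once it has been consumed (by sum() here), you need to create a fresh one to iterate again, which is why the sketch calls processed_values twice.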

Conclusion

Optimizing Python code doesn't have to be intimidating. As we've seen, small changes in how you approach common programming tasks can produce dramatic improvements in speed and memory usage. The key is developing an intuition for the most efficient tool for each job.

Remember these key principles: use built-in functions whenever they're available, choose the right data structure for your use case, avoid unnecessary work, and be mindful of how Python handles memory. List comprehensions, sets, string joining, and generators are all tools that belong in every beginning Python programmer's toolkit. Keep practicing, and happy coding!

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. She also creates engaging resource overviews and coding tutorials.

