How I solved “jQuery Ajax Uncaught TypeError: Cannot read property ‘type’ of undefined”

A solution for an error occurring during a jQuery $.ajax request.

I was using this common jQuery Ajax pattern on a page I was working on:

    $(function () {
        $(document).on('click', '.create-domain .submit', function (e) {
            e.preventDefault();

            var data = {
                domain_description: $('.create-domain .domain-description-textarea')
            };

            $.ajax({
                type: 'post',
                url: '/process/something.php',
                data: data,
                error: function (data) {
                    console.debug(data);
                },
                success: function (response) {
                    //stuff
                }
            });
        });
    });

But on clicking the submit element, I kept getting this cryptic error:

Uncaught TypeError: Cannot read property 'type' of undefined
    at r.handle (jquery-2.2.4.min.js:3)
    at e (jquery-2.2.4.min.js:4)
    at Gb (jquery-2.2.4.min.js:4)
    at Gb (jquery-2.2.4.min.js:4)
    at Gb (jquery-2.2.4.min.js:4)
    at Gb (jquery-2.2.4.min.js:4)
    at Function.n.param (jquery-2.2.4.min.js:4)
    at Function.ajax (jquery-2.2.4.min.js:4)
    at HTMLButtonElement.<anonymous> (something.php:575)
    at HTMLDocument.dispatch (jquery-2.2.4.min.js:3)
    at HTMLDocument.r.handle (jquery-2.2.4.min.js:3)

The problem was that in the data variable, I was including an HTML element (a textarea) instead of the textarea’s content. The corrected code is below (notice the .val() at the end):

            var data = {
                domain_description: $('.create-domain .domain-description-textarea').val()
            };
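The same idea can be factored into a small helper. This is a hypothetical sketch (the function name and shape are my own, not from the original post): build the data object out of field values up front, so that only strings, never DOM or jQuery objects, reach $.ajax. It accepts raw strings or element-like objects exposing a .value property, which is what jQuery’s .val() reads under the hood.

```javascript
// Hypothetical helper: extract field values into a plain object so that
// only strings end up in the Ajax request data.
function collectFields(fields) {
    var data = {};
    for (var key in fields) {
        var el = fields[key];
        // accept either a raw string or an element-like object with a .value
        data[key] = (el && typeof el.value === 'string') ? el.value : String(el);
    }
    return data;
}
```

With something like this, the data object would be built as collectFields({ domain_description: document.querySelector('.create-domain .domain-description-textarea') }), and the mistake above becomes impossible.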

Hopefully this will help a few people, making the world economy more efficient by 1×10^-12% (saving the world economy $107 USD over the next year).

Fixing the kworker CPU usage / ACPI errors issue on a Skylake motherboard (ASRock Z170 Pro4)

In which ASRock bricks my motherboard and a random $10 Chinese device comes to the rescue, with the help of a German gentleman

Since I no longer trust the spyware that is Windows 10, I had wanted to move my main PC (6700K CPU, R9 290 graphics card, ASRock Z170 Pro4 motherboard) to Linux for months, and finally did it yesterday. Everything worked as expected until, while working inside Ubuntu, I started getting messages that the computer was low on disk space, even though I had allocated 25 gigabytes to the root partition.

Using ncdu in the terminal, I found that the log folder was taking up all the space, and found that /var/log/kern.log and /var/log/syslog were being written to at what seemed to be a rate of 1 MB/second, with endless repetitions of:

ACPI Error: Method parse/execution failed [\_GPE._L6F] (Node ...), AE_NOT_FOUND (...)

Another issue was that the kworker process was constantly using 100% of one of the eight CPU cores.

Forums suggested this was a motherboard firmware issue, so I decided to do a firmware update. My firmware was a pretty early one, something like version 1.5, while the latest available firmware is 7.3. I went to the UEFI interface and tried the “Internet Flash” utility provided by ASRock. It successfully discovered that a firmware update was available, but on clicking update, it would conveniently fail to connect to the internet. Somehow the geniuses at ASRock had created software that could connect to the internet to ask whether an update was available, but would fail to connect to the internet when downloading that same update. Still, I am glad that we are light years ahead of the pain, anguish and days of wasted labor that we used to suffer in the 90s to fix a simple hardware issue.

I downloaded the BIOS binary file from the ASRock website, put it on a USB flash drive, and went to the UEFI interface again, this time using the “Instant Flash” utility. The first time I tried it, the computer instantly crashed and rebooted, and nothing else happened. The second time, it seemed to work until the firmware update got stuck at 10%. I waited for hours to see if it would finish, but it didn’t. I left my computer on overnight, thinking there might still be a tiny chance it would eventually finish. In the morning it hadn’t. So I hard rebooted my PC, and then nothing: it would turn on, but it wouldn’t give any output, not even the ASRock logo that shows at the beginning.

Knowing that the BIOS chip had probably become corrupted by the update, and that I had probably upgraded my motherboard from an ASRock to an ASBrick, I looked into what could be done. After pulling on the BIOS chip on the motherboard for a while, I found that it was designed to come off, so I took it out. I then learned about devices that can reflash a corrupted BIOS chip, and found the Chinese CH341A programmer, which sells for about $10. I ordered one made by a company called SMAKN on Amazon with overnight delivery. This morning it arrived.

At first I was dismayed to see that there were three unattached pieces; I thought they might need soldering:

But after watching this video by UltraNSC, I found that I wouldn’t be needing those pieces. I installed the software provided in the description of the video on an old but working Windows 7 laptop that I have, inserted the device, tried installing all the drivers in the file, and still the software (CH341A.exe) wouldn’t detect the device. I unplugged the device and moved to another USB port, and this time the software detected it.

The software detected that the BIOS chip had a size of 16 megabytes, matching the binary file provided by ASRock. This was a good sign. I erased the BIOS chip with the software, then tried to open the binary file with it, but it wouldn’t see the file, because the one provided by ASRock has no file extension. I renamed the file to have a .bin extension, and now the software could see it. I loaded the file and clicked “Program” to write it to the chip. Everything worked without a problem. I clicked “Verify” to make absolutely sure the data was copied without error, and that came out positive.

I put the BIOS chip back into the motherboard and turned the computer on. A message by American Megatrends came up, and pressing F12 took me to the UEFI interface. I rebooted and was immediately taken into Windows, as the UEFI had forgotten my preferred boot device order. Windows tried to do some sort of automatic repair, then restarted the computer, at which point I went into the UEFI interface and told it to use my main SSD as the boot device. Restarting, I was taken into GRUB, and from there went into Ubuntu. Logging in, I saw that kworker wasn’t acting up anymore and that the logs weren’t being flooded.

Now it is time to install Windows 7 in a networking-disabled virtual machine inside Ubuntu so that I can continue using OneNote and Photoshop without sending all my data to Microsoft. I have also kept my Windows 10 installation on another partition just in case I ever need it, for example to play Battlefield 1, though it seems I’ve become enough of an adult that video games barely interest me anymore; I still enjoy watching Stodeh on Twitch, though.

Solve the invisible spaces problem in Word 2013

An annoying issue in Word 2013 is that sometimes the space key seems to stop working, until you press a non-space character, at which point Word deigns to show you both the space and non-space characters.

To solve the problem, press enter to create a new line, then go back to your line. The problem is caused by a bug in Word where having a page break or section break right after the line you are on prevents spaces from showing. Make sure there is a line (empty or not) below the line you are typing on, and the problem disappears.

How to export the entire sequence by default in Adobe Premiere Pro CS6

  1. Move the yellow playback marker far to the right, until it goes into the blank area and the preview window becomes black. If you are doing batch work, move the marker farther than any of your clips are going to be. For example, if you are exporting 1 minute videos, move the marker to the 2 minute mark.
  2. On the bar below the playback marker’s bar, find the right end of the selection bar and move it to the far left, so that there are 0 seconds selected. The left end of the selection marker should also be to the far left, obviously.
  3. That’s all. Now when exporting, Premiere will automatically select the entire sequence for export.

Islam Question & Answer: Why there are so few Christian terrorists

Color me curious. Raised Protestant, joined American Navy and saw the world, the Dome of the Rock is a supremely beautiful building. Such beauty, why NO COMPASSION! by radicals? I don’t understand the mindset. .. Beauty and hate The issue is not religion, but politics. Radical Muslims are no different from radical communists. They believe […]

Read more...

AWS Storage Historical Pricing and Future Projections

Some blogs are calling the recent price wars between cloud providers “a race to zero”. But this is the wrong way to think about it. As technology progresses, we simply need to start thinking in terms of larger units.

Here is a table of historical Amazon S3 prices:

Date $/GB/Month $/TB/Month
14-Mar-06 0.15 150
1-Nov-08 0.15 150
1-Nov-10 0.14 140
1-Feb-12 0.125 125
1-Dec-12 0.095 95
1-Feb-14 0.085 85
1-Apr-14 0.03 30

In terms of gigabytes, the prices seem to be approaching zero. But in terms of terabytes, the prices are only just starting to become reasonable. The linear projection below suggests that we will keep using terabytes as our unit of choice when speaking of cloud storage until about 2020, when prices will start going below $1 per terabyte per month.

Some time after 2020, perhaps around 2025, we will start speaking in terms of petabytes per month.

Fire Phone folder where screenshots are stored

Using my Windows 7 computer to browse the Fire Phone’s files, I found the screenshots in the following folder:

Computer\Fire\Internal storage\Pictures\Screenshots

To take screenshots, you need to hold down the volume down and power buttons together. You will hear a sound and see an animation informing you that the screenshot was successfully taken.

Islam Question & Answer: Horoscopes and Islam

A Muslim should believe or read horoscopes or not? Because I saw a post that says the person who believes in horoscopes is a disbeliever. Horoscopes go under the category of superstition, since there is no basis in science or religion for them. Therefore a well educated and intelligent Muslim should take them for what they […]

Read more...

List of 20,000 right-angled triangles with whole-number sides

Some mathematical investigations can benefit from having a handy list of right-angled triangles with whole-number sides. We know of the common [a = 3, b = 4, c = 5] triangle often used to illustrate the Pythagorean theorem (3^2 + 4^2 = 5^2), but sometimes we need more of these. For this reason I made the following lists, placed inside handy text files. They start from the smallest possible triangle (the [3, 4, 5] one) and iterate up.

List of 20,000 right-angled triangles with whole-number sides sorted by the smallest side (i.e. side a).

List of 20,000 right-angled triangles with whole-number sides sorted by the largest side (i.e. the hypotenuse or side c).

Mashing two regular expressions together in JavaScript on the fly

var pattern1 = /Aug/;
var pattern2 = /ust/;
var fullpattern = (new RegExp( (pattern1+'').replace(/^\/(.*)\/$/,'$1') + (pattern2+'').replace(/^\/(.*)\/$/,'$1') ));

Explanation:

  • pattern1+'' turns (“casts”) the regular expression object into a string.
  • .replace(/^\/(.*)\/$/,'$1') removes the beginning and ending slashes from the pattern
  • new RegExp() turns the resultant string into a regular expression object. There is no need to add back the delimiters (the slashes), since the RegExp() function (“constructor”) takes the pattern text without them.
  • If you want the resultant expression to have a flag, for example i, add it like so: new RegExp(string, 'i');
  • This code is quite unreadable and you might be doing yourself and others a kindness if you use a less clever method. To make it more readable, the technique can be wrapped in a function:
var rmash = function(reg1,reg2) {
var fullpattern = (new RegExp( (reg1+'').replace(/^\/(.*)\/$/,'$1') + (reg2+'').replace(/^\/(.*)\/$/,'$1') ));
return fullpattern;
};

var my_new_pattern = rmash(pattern1,pattern2);

Generalizing the mash function to handle an arbitrary number of regular expressions and flags is left as an exercise.
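One possible generalization, sketched here using the .source property of regular expression objects, which exposes the pattern text without delimiters and makes the string-casting and slash-stripping unnecessary (the function name is my own):

```javascript
// A generalized mash function: accepts an array of regular expressions
// plus an optional flags string, and joins the patterns via .source.
var rmashAll = function (regexes, flags) {
    var source = regexes.map(function (r) { return r.source; }).join('');
    return new RegExp(source, flags || '');
};

var my_new_pattern = rmashAll([/Aug/, /ust/]);        // equivalent to /August/
var case_insensitive = rmashAll([/aug/, /ust/], 'i'); // equivalent to /august/i
```

Note that naively concatenating sources can change meaning when patterns contain alternation (for example /a|b/ followed by /c/), so a more careful version might wrap each source in a non-capturing group.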

How to do long-running computations in JavaScript while avoiding the “maximum call stack size exceeded” error

The following program calculates the value of the series from the Basel Problem. The result is a number that starts with 1.644934 (the series converges to π²/6, whose digits, like π’s, go on forever), which means the program never exits. Without proper design, such a program runs into the “maximum call stack size exceeded” error, which is designed to prevent a program from using too much memory.

var cr = 1;
var total = 0;
var x = function() {

    total = total + (1/(cr*cr));
    cr++;

    if(! (cr % 20000)) {
        $('#t1').val(total);
        $('#t2').val(cr);
        setTimeout(x,0);
    }
    else {
        x();
    }

};
x(); //initial call to x().

The solution is to add a setTimeout call somewhere in the program before things get too close to exceeding the call stack. In the above program, cr is a counter variable that starts with 1 and increases by 1 for every iteration of the x function. Using the conditional if(! (cr % 20000)) allows the program to catch its breath every 20,000 iterations and empties the call stack. It checks whether cr is divisible by 20,000 without a remainder. If it is not, we do nothing and let the program run its course. But if it is divisible without a remainder, it means we have reached the end of a 20,000-iteration run. When this happens, we output the values of the total and cr variables to two textboxes, t1 and t2.

Next, instead of calling x() the normal way, we call it via setTimeout(x,0);. As you know, setTimeout is generally used to run a function after a certain amount of time has passed, which is why the second argument is usually non-zero. But in this case, we do not need any wait time. The fact that we are calling x() via setTimeout is what matters, as this breaks the flow of the program, allowing proper screen output of the variables and the infinite continuation of the program.

The program is extremely fast, doing 1 million iterations about every 2.4 seconds on my computer. The result (the value of total) is not perfectly accurate due to the limitations of JavaScript’s floating-point numbers. More accuracy can be had using an arbitrary-precision arithmetic library.

You may wonder why we cannot put all calls to x() inside a setTimeout(). The reason is that doing so prevents the JavaScript interpreter from optimizing the program, causing it to run extremely slowly (about 1000 iterations per second on my computer). Using the method above, we run the program in optimized blocks of 20,000 iterations (the first block is actually 19,999 iterations since cr starts from 1, but for simplicity I have said 20,000 throughout the article).
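The chunking pattern can be generalized into a small helper. This is a hypothetical sketch (the function and parameter names are my own): a plain for loop inside each chunk keeps the call stack flat without any recursion, and the setTimeout between chunks yields to the event loop so the page stays responsive.

```javascript
// step: one iteration of the computation; shouldStop: checked before each
// step; done: called when the computation is finished.
function runInChunks(step, chunkSize, shouldStop, done) {
    function chunk() {
        for (var i = 0; i < chunkSize; i++) {
            if (shouldStop()) { done(); return; }
            step();
        }
        setTimeout(chunk, 0); // yield to the event loop, then continue
    }
    chunk();
}

// Example: sum the first million terms of the Basel series. With a chunk
// size larger than the iteration count the whole run happens in the first
// chunk, synchronously; a smaller chunk size would spread the work across
// event-loop turns as in the version above.
var total = 0, cr = 1;
runInChunks(
    function () { total += 1 / (cr * cr); cr++; },
    2000000,
    function () { return cr > 1000000; },
    function () { /* e.g. display total */ }
);
// total is now approximately 1.644933 (pi^2/6 minus roughly 1/1000000)
```

Because each chunk is an ordinary loop rather than a chain of recursive calls, the chunk size here is limited only by how long you are willing to block the UI, not by the stack.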

Using an object anonymously in JavaScript

var month = 'Jan'; //or another three-letter abbreviation

//After the following operation, proper_month will contain the string "January".
var proper_month = {'Jan': 'January',
                    'Feb': 'February',
                    'Mar': 'March',
                    'Apr': 'April',
                    'May': 'May',
                    'Jun': 'June',
                    'Jul': 'July',
                    'Aug': 'August',
                    'Sep': 'September',
                    'Oct': 'October',
                    'Nov': 'November',
                    'Dec': 'December'
                   }[month];
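One caveat of the anonymous-object trick is that an unknown key yields undefined. The same lookup can be wrapped in a reusable function with a fallback (the fallback and the function name are my additions, not part of the original snippet):

```javascript
// Look up the full month name; fall back to the input unchanged if the
// abbreviation is not recognized, instead of returning undefined.
function properMonth(abbr) {
    return {'Jan': 'January',  'Feb': 'February', 'Mar': 'March',
            'Apr': 'April',    'May': 'May',      'Jun': 'June',
            'Jul': 'July',     'Aug': 'August',   'Sep': 'September',
            'Oct': 'October',  'Nov': 'November', 'Dec': 'December'
           }[abbr] || abbr;
}
```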

How to: Become wise


If you want to become wise, read 100 books that interest you. The books you choose to read can be about any topic and they can be of any quality, good or bad. The important thing is that you should find them interesting, because the fact that you find a book interesting means it contains information that is new to you (and thus it increases wisdom), because “interesting” simply means “something that provides new information to the brain”.

No book is going to solve all of your problems. Each book may make you about 1% wiser. Thus if you want to become twice as wise as you are now, you would have to read about 70 books (since 1.01^70 ≈ 2). 100 books would be a safer number.

Some of the books you read will contain false information, because almost any book will contain some claims and assumptions that are false. But if you don’t give up and continue reading books one after another, as your knowledge increases, so will your awareness of what is true and what is false. Wisdom is simply a map of reality (accurate information about how things really are), and each book you read (even a simple story) tries to give you a small piece of the map. Some books will give you false pieces that do not describe anything that actually exists on the map. But as you read more books, your knowledge increases about the other pieces that surround the false piece, and thus you start to have an intuitive sense of what the false piece should actually look like, and thus you recognize the false piece for what it is: false. Recognition of the falsehood in itself increases your knowledge, for your brain can abstract the patterns of falsehood, and it can actually build a map of what falsehood itself looks like, and thus it will become increasingly hard for falsehoods to mislead you.

If you start to read a book that at first seems interesting, but eventually lose interest in it and start to find it boring and tiring, you should feel no qualms about abandoning the book and starting another. When this happens, it can be due to one of two things:

  1. The book does not contain anything that’s new to you, and thus your brain recognizes it as a repetition of things you already know very well, and therefore your brain is asking you to stop wasting your time with the book.
  2. The book contains information that has too many prerequisites, and thus your brain is not equipped to handle the information. You should abandon the book now and return to it after reading many other books.

Spend a year doing this and at the end of it you may laugh at how unwise and biased you used to be a year ago. During your journey you will have picked up some new biases, therefore it is unwise to stop your journey. Continue reading books and these biases will be cleared up. You will never stop picking up biases, but their frequency will decrease as your wisdom increases, for biases have patterns of their own and the wise mind can learn to avoid many of them. This is why you find the wisest people to be those who are least ready to make final judgments on any topic: they are “open-minded”, knowing when they do not have enough information.

In most cases, when it comes to most topics, humans rarely have perfect knowledge, therefore the wisest often refuse to give final answers on anything or to give counsel freely to those who ask for it. They will speak about what they know, and refuse to delve into what they do not know.

Import and play your own audiobooks on the Amazon Fire Phone

[Update: I now recommend using these steps to install the Google Play Store (which does not “root” the device and does not cause any permanent changes), then buying the highly rated Listen Audiobook Player (which has up to 3x playback speed with pitch correction and a slider that shows your place in the book and how much is left–while taking playback speed into account) in the Play Store for $0.99. The entire process takes about 15 minutes.]

Amazon makes it impossible to import audiobooks into the Audible app, probably because it wants you to buy all your stuff from Amazon, under its control. I’d actually be more willing to use Audible if it let me import the many audiobooks I already own from other sources. Most Fire Phone audio apps are useless for audiobooks since they do not let you browse the audiobook’s files, instead treating the audiobook as a song album and making a complete mess of the order of the tracks. Another issue with music players is playback speed. I usually like to listen to audiobooks at double speed (and faster if I am able to give the book my full attention), but most music players I’ve tried on the Fire Phone do not have a playback speed feature.

I had almost lost hope of getting a proper audiobook experience out of the Fire Phone, until I happened on the Rocket Player app, which has almost all the features required of an audiobook player:

  1. It allows you to browse the files on the phone and keep the proper order of the tracks (while other players mess up the track order). If the track order is still messed up in Rocket Player, use a free and open-source Windows program called Mp3BookHelper (Project Page | Download Link) to rename the tracks (both file names and the Title ID3 tag) sequentially.
  2. It has a playback speed setting (after buying the $4 premium version of the app) with pitch correction. The playback speed can only go up to double speed, which is pretty good, but I wish it could go up to 4x.
  3. It remembers your place in the book, even after closing the app (provided that you do not use the app to listen to other things, which is quite doable since there are many other apps optimized for music listening).

Steps for importing your own audiobooks on the Fire Phone and playing them using Rocket Player

  1. Install the free Rocket Player App, then upgrade it to the $4 premium version.
  2. Move your audiobook into a folder on your phone. You can use the USB cable or, if your laptop supports bluetooth, you can use that too, though USB is much faster.
  3. If you used the USB cable, unplug it, otherwise the audio player may not be able to see the new files.
  4. Tap the “Folders” tab in Rocket Player. Browse to the audiobook folder (but don’t go inside the folder). Tap the folder and hold, until a menu comes up. Press “Add to playlist” and create a new playlist. Now you can go to the “Playlists” tab to find the audiobook and play it.
  5. In the Rocket Player settings, you can find the “Playback speed” setting and change it to what you like.

Growth of CPU GFLOPS by year, with future projections

In Q1 2006, the fastest, most expensive CPU could do 12.421 GFLOPS on the Whetstone test. In Q4 2014, the fastest consumer CPU (Intel Core i7-5960X) can do 169.79 GFLOPS.

I added two trend lines to the chart. The green one is a linear trend line, showing that in January 2018 we will have a 200 GFLOPS CPU, which doesn’t sound like much, while the red exponential trend line promises 500 GFLOPS during the same period. The truth will likely be somewhere in between.

The latest CPU’s gains come from its 8 cores, so a better performance chart would show only single-thread improvements: single-thread performance reflects the true improvement per core and is a big bottleneck for many games and applications.

A quick single-thread comparison can be done between the Intel Core 2 Extreme X7900 (Q3 2007), which received a single-thread score of 968 on the PassMark test, and the Intel Core i7-2600K (Q1 2011), which received almost exactly double the single-threaded performance at 1943. It took Intel less than 4 years to double the performance of its highest-end consumer CPU. But three years later, the fastest CPU in single-threaded tests is the Intel Core i7-4790K with a score of 2532, meaning that in about four years Intel has only managed to gain a 30% performance improvement in single-threaded applications.

This is a big deal and shows the performance stagnation that gamers and professionals have been complaining about in recent years. If the next four years end up like the past four, in 2018 the fastest consumer CPU will only be 30% faster if no additional cores are added. The interpretation of Moore’s Law that promised a doubling of performance every 18 months has long been inaccurate.

Below is the data the chart above is based on, taken from CPU reviews that featured the SiSoft Sandra Whetstone test.

 Quarter        GFLOPS
 10-Jan-06	12.421  
 10-Apr-06	15.703   
 10-Oct-06	33.797   
 10-Apr-07	37.693   
 10-Jul-07	26.7     
 10-Oct-07	44.4     
 10-Jan-08	44.2     
 10-Oct-08	62.879   
 10-Jan-09	66.5     
 10-Oct-09	55.9     
 10-Oct-10	67       
 10-Apr-11	83       
 10-Jul-11	91       
 10-Oct-11	121      
 10-Jan-12	136      
 10-Apr-12	93.2     
 10-Jul-12	126      
 10-Apr-13	93       
 10-Jul-13	135.4    
 10-Oct-14	169.79

Using one category page to show multiple categories in WordPress

[Update: There is probably never a good reason to do this. Instead, create a new category to hold the posts.]

Trying to show multiple categories in one loop is easily the hardest thing I’ve done in WordPress.

  1. First, create a container category where you want your multiple categories to be shown. Let’s call it the MultiCat category and give it the multicat slug. No posts are required to belong to this category, and if they do, it will have no benefit.
  2. Next, add this bit of code to functions.php of your theme. This is where we create a query variable which enables us to identify the multi-category page properly. Update the category slugs below to match the slugs of the categories you want to show together.
    function multi_cat_handler( $query ) {
        if ( $query->is_main_query() && $query->query["category_name"] == 'cat1-slug,cat2-slug,cat3-slug,cat4-slug' ) {
            $query->set( "allish", 'yes' );
        }
    }
    add_action( 'pre_get_posts', 'multi_cat_handler' );
  3. Next, add this code to functions.php. Update multicat to the slug of your multiple categories category. Also update the other slugs as in the previous step.
    function alter_the_query_for_me( $request ) {
        $dummy_query = new WP_Query();
        $dummy_query->parse_query( $request );
        if ( $dummy_query->query['category_name'] == 'multicat' ) {
            $request['category_name'] = 'cat1-slug,cat2-slug,cat3-slug,cat4-slug';
        }
        return $request;
    }
    add_filter( 'request', 'alter_the_query_for_me' );
  4. To display the h1 tag of the MultiCat category page properly, we use the following code:
    if ( get_query_var('allish') == 'yes' ) {
        echo 'Title of the Multiple Categories Page';
    }
    else {
        echo 'Normal code that outputs category title';
    }

    If you do not do the above, when people go to the MultiCat category page, they will see a random title from one of the multiple categories you want to show on the page, which is not the behavior you want.

  5. Below is the main code that outputs your posts. The if clause at the top allows us to know we are on the multiple categories page (we cannot use other methods such as checking category ID, since that will return a random category’s ID from the multiple categories we want to show).

    Here lies the code that outputs your post content

    Here is the loop that outputs your normal categories

    The $args array contains the query we use to pull posts from the database. We are pulling posts from the categories with the IDs of 3, 4, 671 and 672. Notice that in Step 2 we used category slugs, while in this step we are using category IDs. They have to match, and order may matter.

That’s all.

Caveats

The RSS feed of the category page will be the RSS feed of one of the categories shown on the MultiCat page. This may be fixable through using RSS-specific filters, but in my case I had no need for RSS and did not try to find a fix.

How to moderate bbPress submissions that contain links

The most common trait of forum spam is that it contains links. The code below (add it to the functions.php file of your main WordPress install’s theme) filters new bbPress topics and replies, and if it detects a link, it marks the submission as “pending”, allowing moderators to review it in the back end before publishing. The code works on bbPress version 2.5.4.

The code, however, creates front-end issues. If it is a new topic, the user is redirected to a page that contains the topic title but not the topic content. If it is a new reply, the page reloads with no indication that the reply has been saved. These issues may be solvable with query variables and some jQuery, but in my case, almost all submissions that contain links are guaranteed to be spam, so user experience is not a big concern.

function bb_filter_handler( $data, $postarr ) {

    // If post_date and post_modified are the same, it is a new reply/topic. If they
    // differ, it is a moderator editing the reply/topic (such as changing it from
    // "pending" to "published" status), so we let the data through without filtering.
    // Without this, admins/moderators would not be able to change a reply/topic
    // from "pending" status to "published".
    if ( strtotime( $data["post_date"] ) != strtotime( $data["post_modified"] ) ) {
        return $data;
    }

    if ( ( $data["post_type"] == 'reply' || $data["post_type"] == 'topic' ) && $data["post_status"] == 'publish' ) {

        $text = $data["post_content"];

        $regex  = "((https?|ftp)\:\/\/)?"; // SCHEME
        $regex .= "([a-z0-9+!*(),;?&=\$_.-]+(\:[a-z0-9+!*(),;?&=\$_.-]+)?@)?"; // User and Pass
        $regex .= "([a-z0-9-.]*)\.([a-z]{2,3})"; // Host or IP
        $regex .= "(\:[0-9]{2,5})?"; // Port
        $regex .= "(\/([a-z0-9+\$_-]\.?)+)*\/?"; // Path
        $regex .= "(\?[a-z+&\$_.-][a-z0-9;:@&%=+\/\$_.-]*)?"; // GET Query
        $regex .= "(#[a-z_.-][a-z0-9+\$_.-]*)?"; // Anchor

        if ( preg_match( "/$regex/", $text ) ) {
            $data["post_status"] = 'pending';
        }
    }

    return $data;
}
add_filter( 'wp_insert_post_data', 'bb_filter_handler', 99, 2 );
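For comparison, the same link-detection idea can be reduced to a single expression. The following is a hypothetical JavaScript simplification of my own, not the pattern used above: it flags any text containing an optional scheme followed by something shaped like host.tld.

```javascript
// Simplified link check: an optional http/https/ftp scheme, then a
// dotted host name ending in a 2-3 letter TLD, then an optional path.
function containsLink(text) {
    return /((https?|ftp):\/\/)?[a-z0-9-]+(\.[a-z0-9-]+)*\.[a-z]{2,3}(\/\S*)?/i.test(text);
}
```

A reduction like this trades precision for readability; the longer pattern above also matches credentials, ports, query strings and anchors.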

Using jQuery and JSON to recover from a failed TablePress save

I was happily working away on my 700+ row table in TablePress, saving occasionally. Server issues came up and I was prevented from saving for a few hours. Eventually the server was back up again and I wanted to save, but I ran into the dreaded Ajax save failure message.

Even using shift+save did not work; it took me to the silly and useless “Are you sure?” WordPress page.

Refreshing the page would have meant losing many hours of work. I tried various ideas, but all failed. The most desperate idea was to use jQuery to get the values of all the table cells, put them into an array, copy the string of the array, refresh the page, then use jQuery to feed the array back into the cells. I tried to do it in Firefox, using the built-in inspector and Firebug, only to be reminded of how much I dislike Firefox’s slow and clunky inspector tools (I was using Firefox because it performs better than Chrome on super-sized web apps like a massive TablePress table).

So I needed a way to move my work to Chrome, but how? I saved the TablePress page as an HTML document on my computer, then opened it in Chrome. Saving the editor as an HTML document preserves the values of the input fields, so when I opened it in Chrome all the values of the cells were there.

Next, I used a jQuery bookmark to load jQuery on the page in Chrome, then I ran the following two lines in the console:

my_array = [];
$('textarea').each(function(){ my_array.push($(this).val()); });

The above code loads the values of the textboxes into an array. The Chrome console doesn’t have a built-in way of letting you copy an object or array’s source code so that you can paste it somewhere else, so we have to improvise. We know that the console will print out the value of any object, and if it is a string, it will plainly print the string.

For example, if we place the word “hello” in a variable x, then on the next line simply type the name of the variable and press enter, Chrome gives us the string “hello”. If we instead type the name of an array variable, Chrome lets us browse the values inside the array. This is usually helpful, but not this time, since we need the array in a format that can be copied.

What we need is to stringify the array somehow. In this case, the JavaScript JSON API comes to the rescue. We place the array my_array inside the my_string variable using the line below:

var my_string = JSON.stringify(my_array);

Afterwards, we type my_string into the console, causing Chrome to print the plaintext version of the array.

We then copy the entire text (making sure to skip the beginning and end quotes the console shows around the string, since we won’t be needing them), then open the TablePress backend in a new tab, loading the table we were working on. The table will lack the cells we had added but could not save, so we populate this working backend with the data we copied. We open the console, re-enable jQuery using the bookmark, and use a line of the form var my_array = <pasted text>; to load the text back into an array. We do not need the JSON API’s parse function, since the plaintext is already a valid array initialization.

Next, we use the line below to add the values of the array into the table:

$('textarea').each(function(){ $(this).val(my_array.shift());   });

All done! In the first .each function above, we used my_array.push() to add values to the end of the array. To keep the values in order, we now use my_array.shift(), taking items from the beginning of the array and feeding them to the textareas from first to last.
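Stripped of the jQuery/DOM parts, the capture/restore cycle is just a JSON round-trip. A minimal sketch, with made-up cell values for illustration:

```javascript
var captured = ['cell 1', 'cell 2', 'cell 3'];   // values pushed from the textareas
var serialized = JSON.stringify(captured);       // plain text, safe to copy anywhere
var restored = JSON.parse(serialized);           // back to a real array after the refresh
var first = restored.shift();                    // takes 'cell 1', keeping the order intact
```

When pasting by hand into the console, assigning the copied text directly (var restored = ["cell 1", ...];) does the same job as JSON.parse, since the stringified array is itself a valid array literal.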

In this way I managed to get my work back. Another solution I could have tried would have been to see whether WordPress could be forced to accept the data it was rejecting (apparently due to an expired session or something like that). But that might have required a lot more work, and possibly modifications to the WordPress core, which is always risky and not fun.
