Resolved Bug

This bug has been marked as resolved.

Ghost execution of queueable Jobs

I started out with a more complex scenario because I was trying to get FlexJob Scheduler to run queued jobs, including a Backup Voodoo queued job. However, after bashing my head against this issue for a while, I took both of them out of the equation. While both of those addons are installed, both were disabled for the test below.

I also have Mnkras' File Hasher job installed as part of the test.

To reproduce (I hope):
1. From the dashboard, run File Hasher.
2. From the dashboard, run Index Search Engine - All.
3. Note the URLs to run Index Search Engine All remotely.
4. In a new browser, enter the run_single URL for Index Search Engine All.
5. In another new browser, enter the URL for check_queue.
6. Reload that browser to run a few check_queue ticks.

Watch as the JSON returned shows that both Index Search Engine All and File Hasher get run, even though only Index Search Engine All has actually been started.

It appears that either the steps of both jobs are run when only one job has been initialised, or maybe it is the finish() method of both that is being endlessly repeated each time check_queue is run.

Even starting a different browser (so no possibility of cache), in response to the URL http://localhost/cms/development5621/index.php/tools/required/jobs/... I just continue to get:
{"error":0,"result":"Hashes updated. 41 files hashed with md5.","jDateLastRun":"October 01, 2013 at 12:59:42 AM","totalItems":0}{"error":0,"result":"Index updated. 239 pages indexed.","jDateLastRun":"October 01, 2013 at 12:59:42 AM","totalItems":0}

I have also tried putting random dummy parameters on the end of the URL to make sure nothing caches.

I also get the same result the other way round, after a run_single of File Hasher.

Status: Resolved
mkly replied on at Permalink Reply
I'm taking a bit of a day off today, but could you check this?

If you change that to
$list = Job::getList(true);

Just a quick look, but I wonder if that is it?

Best Wishes,
JohntheFish replied on at Permalink Reply
I got all excited about such an easy solution :) But unfortunately I then found it doesn't solve the problem.

With Job::getList(true), the $list returned is always empty.

However, if on the jobs page I select the radio option to run internally every day, Job::getList(true) returns a list of those scheduled to run internally.

So Job::getList(true) appears to be returning a list of jobs selected to be run internally, and has nothing to do with jobs scheduled by run_single.

On the way I found some other bugs.

The field used to enter an interval number for internally run jobs is not protected against illegal input.

A loading issue when queueable jobs are run at the same or overlapping times.
JohntheFish replied on at Permalink Reply
Thinking more about this and the other issue with multiple slices, the solution may be a meta queue: an ordered queue of queueable jobs with methods to get_top and clear_top.

So check_queue would:
job = get_top // just gets it, does not remove it!
run a step of job
if (job is finished) {
    clear_top // now remove the finished job from the meta queue
}
Adding to the meta queue would need to check for duplicate additions, in case run_single for a job was called again before a previous run had cleared.
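The meta queue idea above could be sketched roughly as below. This is only an illustrative sketch, not real concrete5 API: the class name, method names and job handles are all hypothetical, following the get_top/clear_top naming from the comment above.

```php
<?php
// Hypothetical meta queue: an ordered queue of queueable job handles.
// None of these names exist in concrete5; this just sketches the idea.
class MetaQueue
{
    private $jobs = array();

    // Add a job handle, skipping duplicates in case run_single for a
    // job is called again before a previous run has cleared.
    public function add($jobHandle)
    {
        if (!in_array($jobHandle, $this->jobs, true)) {
            $this->jobs[] = $jobHandle;
        }
    }

    // Peek at the front of the queue without removing it,
    // so check_queue can run one step of that job.
    public function get_top()
    {
        return empty($this->jobs) ? null : $this->jobs[0];
    }

    // Remove the front entry once its job has finished.
    public function clear_top()
    {
        array_shift($this->jobs);
    }
}

// Example: two run_single calls plus a duplicate, which is ignored.
$queue = new MetaQueue();
$queue->add('index_search_all');
$queue->add('file_hasher');
$queue->add('index_search_all'); // duplicate run_single: no second entry
```

With this shape, each check_queue tick would run a step of get_top() and only call clear_top() (and the job's finish()) when that one job reports it is done.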
JohntheFish replied on at Permalink Reply
I am now pretty sure it is just the finish() method of all queueable jobs that is getting called if there is no job step currently on the queue.

This is also resulting in a profusion of entries in the Jobs log each time check_queue is run, and overwriting of job results on the Jobs dashboard page.
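To illustrate the diagnosis, here is a hypothetical model of one check_queue tick with the guard that seems to be missing: finish() runs only for the job whose final step was just taken off the queue, and an empty queue produces no finish() calls (and so no ghost log entries) at all. All names here are illustrative, not real concrete5 API.

```php
<?php
// Each queue entry is array($jobHandle, $isLastStep).
// The guard on the empty queue is the point of this sketch: without it,
// calling finish() on every queueable job re-logs every job's last
// result on every tick, which matches the symptom described above.
function check_queue_tick(array &$queue, array &$log)
{
    if (empty($queue)) {
        // Nothing queued means no job is running: call no finish().
        return;
    }
    list($job, $isLastStep) = array_shift($queue);
    $log[] = "ran step of $job";
    if ($isLastStep) {
        $log[] = "finished $job"; // finish() for this one job only
    }
}
```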
JohntheFish replied on at Permalink Reply
Are there any updates on how the ghost execution problem and the multiple slices problem could be solved?
mkly replied on at Permalink Reply
Oh, did I happen to miss a Pull Request for this? I have been a bit busy the last week or so.

Best Wishes,
JohntheFish replied on at Permalink Reply
No pull request yet. I thought this and the other referenced bug needed some strategy discussion first because at the moment I can only guess what the requirement is.

If the code to fix this is non-trivial, then I don't want to spend time on something that would not get approval.

Also, does the interface from queueable jobs need to remain as-is, or could a solution require a different interface? Not saying that would be the solution, just that I need some idea of parameters.

For example, one solution to both bugs would be a single queue that all queueable jobs placed their slices on, with either a means of determining which was the first/last slice or pseudo-slices as first/last markers in the queue. But would that be acceptable?
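The single-shared-queue idea above could look something like this sketch: each job brackets its slices with first/last pseudo-slice markers, so whatever consumes the queue knows exactly when to call start() and finish() for each job. Every name here is hypothetical, not real concrete5 API.

```php
<?php
// Each entry is array($kind, $jobHandle, $payload), where $kind is
// 'first', 'last' (pseudo-slice markers) or 'slice' (real work).
function enqueue_job($jobHandle, array $slices, array &$queue)
{
    $queue[] = array('first', $jobHandle, null);  // marker pseudo-slice
    foreach ($slices as $slice) {
        $queue[] = array('slice', $jobHandle, $slice);
    }
    $queue[] = array('last', $jobHandle, null);   // marker pseudo-slice
}

// One tick of the shared queue: start/finish happen only at a job's
// own markers, so interleaved jobs cannot trigger each other's finish().
function process_tick(array &$queue, array &$log)
{
    if (empty($queue)) {
        return;
    }
    list($kind, $job, $payload) = array_shift($queue);
    switch ($kind) {
        case 'first':
            $log[] = "start $job";
            break;
        case 'last':
            $log[] = "finish $job";
            break;
        default:
            $log[] = "run $job: $payload";
    }
}
```

Because finish() is tied to a marker belonging to one specific job, this shape would address both the ghost execution and the overlapping-jobs problem at once, assuming duplicate enqueues are also rejected.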
mkly replied on at Permalink Reply
In general, I'm very much a fan of code as the proposed solution to the problem. This is a bit different than concrete5 stuff in the past, but my style is that long discussions typically take longer than just writing some code to solve the issue. The code doesn't always need to be "shrink wrap" ready; just a quick sketch can sometimes be enough.

That said, to be honest, I just haven't had the minutes to investigate this beyond your message. Hopefully, I have a bit more time later this week.

Thanks for all the investigation into this issue.

Best Wishes,
JohntheFish replied on at Permalink Reply
I'm not in a rush and have enough to keep myself busy. My only constraint is to have a solution in place as part of the next core release.
keeasti replied on at Permalink Reply
Running the URL gives me the following:
{"results":[{"error":0,"result":"Indexing complete. Index is up to date","jDateLastRun":"November 16, 2013 at 12:08:25 AM"},{"error":0,"result":"http:\/\/www.***domain***\/sitemap.xml file saved (34 pages).","jDateLastRun":"November 16, 2013 at 12:08:27 AM"},{"error":0,"result":"The Job was run successfully.","jDateLastRun":"November 16, 2013 at 12:08:27 AM"},{"error":0,"result":"0 versions deleted from 3 pages (126,128,129)","jDateLastRun":"November 16, 2013 at 12:08:27 AM"}]}
JohntheFish replied on at Permalink Reply

concrete5 Environment Information

# concrete5 Version

# concrete5 Packages
Avatar (0.9.3), Backup Voodoo (, Blocks by AJAX (1.1.2), Bounce Box (, Box Grabber (1.3), eCommerce (2.8.8), Enlil Transparent Content (0.9.5), File Hasher (., FlexJob Scheduler (, Formidable (1.1.4), Formidable Importer (0.9.3), Formigo Containers (1.0.4), Front End Uploader (2.0), Image List Templates (1.0.1), IRC Chat (0.9.1), Last Updated (1.4), List files from set (1.0.4), Magic Data (2.3.2), Magic Data Commerce (1.1), Magic Data Developer (0.8.2), Magic Data On Block Load (1.0.2), Magic Data Symbols1 (1.9), Magic Data Templates1 (1.1), Magic Heading (1.3.1), Magic Tipple (1.3.1), Magic Yogurt (1.0), NET Bible (1.1.1), Page List Teasers (1.3), PC5 Custom Templates (3.1.2), Quick Param View (, Reward (1.4.3), Search Block Templates (1.0), Similar Products (1.9.1), Sorcerer's Map (0.9), Tokens (0.9.3), Universal Content Puller (1.3), Universal Content Puller Formigo Containers Wrapper (1.3), Universal Content Puller Sources1 (1.3), Universal Content Puller Sources2 (1.3), Upload and Install Packages (, Where Is My Block? (1.1).

# concrete5 Overrides

# Server Software
Apache/1.3.11 (Win32)

# Server API

# PHP Version

# PHP Extensions
bcmath, bz2, calendar, cgi-fcgi, com_dotnet, ctype, curl, date, dom, exif, filter, ftp, gd, gettext, gmp, hash, iconv, imap, json, libxml, mbstring, mcrypt, mhash, mime_magic, mysql, mysqli, odbc, openssl, pcre, PDO, pdo_mysql, PDO_ODBC, pdo_sqlite, pgsql, Reflection, session, SimpleXML, soap, sockets, SPL, standard, tidy, tokenizer, wddx, xml, xmlreader, xmlrpc, xmlwriter, zip, zlib.

# PHP Settings
max_execution_time - 30
log_errors_max_len - 1024
max_file_uploads - 20
max_input_nesting_level - 64
max_input_time - 60
memory_limit - 128M
post_max_size - 8M
safe_mode - Off
safe_mode_exec_dir - no value
safe_mode_gid - Off
safe_mode_include_dir - no value
sql.safe_mode - Off
upload_max_filesize - 2M
mysql.max_links - Unlimited
mysql.max_persistent - Unlimited
mysqli.max_links - Unlimited
odbc.max_links - Unlimited
odbc.max_persistent - Unlimited
pcre.backtrack_limit - 100000
pcre.recursion_limit - 100000
pgsql.max_links - Unlimited
pgsql.max_persistent - Unlimited
session.cache_limiter - nocache
session.gc_maxlifetime - 7200
soap.wsdl_cache_limit - 5
safe_mode_allowed_env_vars - PHP_
safe_mode_protected_env_vars - LD_LIBRARY_PATH

Browser User-Agent String

Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36